Test Report: Docker_Linux_crio 22301

c84bfbde0fd66fb1774332f176e8277185e66d9f:2025-12-25:42985

Failed tests (26/419)

TestAddons/serial/Volcano (0.24s)

=== RUN   TestAddons/serial/Volcano
addons_test.go:852: skipping: crio not supported
addons_test.go:1055: (dbg) Run:  out/minikube-linux-amd64 -p addons-335994 addons disable volcano --alsologtostderr -v=1
addons_test.go:1055: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-335994 addons disable volcano --alsologtostderr -v=1: exit status 11 (237.730653ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1225 18:29:39.875582   18816 out.go:360] Setting OutFile to fd 1 ...
	I1225 18:29:39.875862   18816 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1225 18:29:39.875870   18816 out.go:374] Setting ErrFile to fd 2...
	I1225 18:29:39.875875   18816 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1225 18:29:39.876098   18816 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22301-5579/.minikube/bin
	I1225 18:29:39.876355   18816 mustload.go:66] Loading cluster: addons-335994
	I1225 18:29:39.876660   18816 config.go:182] Loaded profile config "addons-335994": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1225 18:29:39.876677   18816 addons.go:622] checking whether the cluster is paused
	I1225 18:29:39.876753   18816 config.go:182] Loaded profile config "addons-335994": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1225 18:29:39.876768   18816 host.go:66] Checking if "addons-335994" exists ...
	I1225 18:29:39.877127   18816 cli_runner.go:164] Run: docker container inspect addons-335994 --format={{.State.Status}}
	I1225 18:29:39.895759   18816 ssh_runner.go:195] Run: systemctl --version
	I1225 18:29:39.895811   18816 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-335994
	I1225 18:29:39.913021   18816 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22301-5579/.minikube/machines/addons-335994/id_rsa Username:docker}
	I1225 18:29:40.002703   18816 cri.go:61] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1225 18:29:40.002800   18816 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1225 18:29:40.032176   18816 cri.go:96] found id: "ae13d82ab19208f4952cb94c64dec5d732ae1f39f8e0621404c7247137e52a9c"
	I1225 18:29:40.032228   18816 cri.go:96] found id: "a564f66aff1c230f12034368888345290a1ac191db5b257cbd32826875a8ad67"
	I1225 18:29:40.032236   18816 cri.go:96] found id: "fb9aa0d60f0c81e15923123520efc20954a633574450f74ec0ea0a3e90b314c8"
	I1225 18:29:40.032242   18816 cri.go:96] found id: "d2650d63d689ad88a17eeaf98093c607d510fcd6b22a23c2af9efd1f2932e619"
	I1225 18:29:40.032245   18816 cri.go:96] found id: "4128b130074a25d4a8df28170f6846d37ebfd7a2d07a5fc33dab746c82648915"
	I1225 18:29:40.032249   18816 cri.go:96] found id: "33817592eb0db2e6f07a567e2c6a05ce69c1ac649b019e92188ab696db18c932"
	I1225 18:29:40.032254   18816 cri.go:96] found id: "9b67245ec9b381405f30593d867f5e5cbfffaf89edb502cac7c7f5a98858b0ab"
	I1225 18:29:40.032259   18816 cri.go:96] found id: "2919cee4cae672e017d2cc057b52625b032a2c6ef08da6fbf0620796be106460"
	I1225 18:29:40.032267   18816 cri.go:96] found id: "8fbc3d212062a38e6622ed9fbc3f0889258cf6f4e7d4fb14afd72b9fe1b3111f"
	I1225 18:29:40.032284   18816 cri.go:96] found id: "a10d92f993ff9eeafa2fcb2a92dc72f8c14e2c06d1d5bdb76b1599e29961486e"
	I1225 18:29:40.032292   18816 cri.go:96] found id: "0d7c280dc245249ed1f3be62c6c9ae663ce51ccaf65687f39e4e60bd34291ccf"
	I1225 18:29:40.032297   18816 cri.go:96] found id: "e00018aa5b33d1f32fa4e4a0a1d02edaff40699b654dabb056cb2e317b7d6c59"
	I1225 18:29:40.032304   18816 cri.go:96] found id: "e8585c6c0c58b6cd9c9959116cf5a0b20dc858dc19e2352cc6ce199a37e5a7aa"
	I1225 18:29:40.032309   18816 cri.go:96] found id: "b68e3df89706be4b0915e318354d2368e0bf41b39dce1a0641435ee4df7548d2"
	I1225 18:29:40.032317   18816 cri.go:96] found id: "9e74ae6ae78b4dc3b1c93a60da879857955a5f6be8a7782273964ab44b255c66"
	I1225 18:29:40.032331   18816 cri.go:96] found id: "c5c8ab56e74b2b7a6373b1a58b03e6fb619d169b58601995d735e897d9c758ea"
	I1225 18:29:40.032336   18816 cri.go:96] found id: "eacb0925a485dcae72269b51d9663345c1f11632b5013a549a26bf8fb2fb5c80"
	I1225 18:29:40.032341   18816 cri.go:96] found id: "085d0c77def90391d2a114e99f6587e2a0c0a3760dae320144cfaab0961fa907"
	I1225 18:29:40.032346   18816 cri.go:96] found id: "e2dc79b0850b584749fd199f4bbde9ba7b322136a49f4c877b6c309de232e3bc"
	I1225 18:29:40.032355   18816 cri.go:96] found id: "80a662cb164e44deed87cc48e71e68239949d38c2c56a491690a04f800923b20"
	I1225 18:29:40.032360   18816 cri.go:96] found id: "e3cdb3152d28b90fb1def2b45c5dc8a83b7578b628a0c73854286d5ed340874b"
	I1225 18:29:40.032367   18816 cri.go:96] found id: "ccfe4d87852c0e13dcf53f3749926a9e274f59909436cd817078474a0546af7f"
	I1225 18:29:40.032372   18816 cri.go:96] found id: "ae5624121adcc542b0fa7d372b4201440bf41b8429c7c957b8f58572f05dce8b"
	I1225 18:29:40.032379   18816 cri.go:96] found id: "4beb6e0a291214adc57d2a068c0b6283ca02b7da651625ff32c7fa8173b8294a"
	I1225 18:29:40.032383   18816 cri.go:96] found id: ""
	I1225 18:29:40.032442   18816 ssh_runner.go:195] Run: sudo runc list -f json
	I1225 18:29:40.046813   18816 out.go:203] 
	W1225 18:29:40.047887   18816 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-25T18:29:40Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-25T18:29:40Z" level=error msg="open /run/runc: no such file or directory"
	
	W1225 18:29:40.047918   18816 out.go:285] * 
	* 
	W1225 18:29:40.048576   18816 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_9bd16c244da2144137a37071fb77e06a574610a0_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_9bd16c244da2144137a37071fb77e06a574610a0_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1225 18:29:40.049638   18816 out.go:203] 

** /stderr **
addons_test.go:1057: failed to disable volcano addon: args "out/minikube-linux-amd64 -p addons-335994 addons disable volcano --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/serial/Volcano (0.24s)
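Note: the failures shown in this section all follow the pattern of the block above. The addon test itself passes or is skipped, but the final `addons disable ...` call exits with status 11 (MK_ADDON_DISABLE_PAUSED) because minikube first checks whether the cluster is paused by running `sudo runc list -f json` on the node; on this CRI-O node /run/runc does not exist, so the command exits 1 and the disable aborts. The sketch below is a minimal, hypothetical reproduction of that check, meant to be run on such a node (for example via `minikube ssh`); the function names and the crictl fallback are illustration-only assumptions, not minikube's implementation.

// pausedcheck_sketch.go: hypothetical reproduction of the failing paused-state
// check seen in the logs above. This is not minikube code, and the fallback
// strategy shown here is only an assumption.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// listRuncContainers mirrors the failing step from the log:
// "sudo runc list -f json". On a CRI-O node whose runtime state root is not
// /run/runc, runc exits non-zero with
// "open /run/runc: no such file or directory".
func listRuncContainers() (string, error) {
	out, err := exec.Command("sudo", "runc", "list", "-f", "json").CombinedOutput()
	return string(out), err
}

// listKubeSystemContainers is the CRI-level listing that the same log shows
// succeeding just before the runc call; it does not depend on /run/runc.
func listKubeSystemContainers() (string, error) {
	out, err := exec.Command("sudo", "crictl", "--timeout=10s", "ps", "-a", "--quiet",
		"--label", "io.kubernetes.pod.namespace=kube-system").CombinedOutput()
	return string(out), err
}

func main() {
	out, err := listRuncContainers()
	if err == nil {
		fmt.Println("runc list succeeded:", strings.TrimSpace(out))
		return
	}
	// This is the condition the report hits: the error is treated as fatal and
	// the disable exits with MK_ADDON_DISABLE_PAUSED.
	fmt.Printf("runc list failed (as in the report): %v\n%s", err, out)

	ids, err := listKubeSystemContainers()
	if err != nil {
		fmt.Println("crictl listing also failed:", err)
		return
	}
	fmt.Printf("crictl still sees %d kube-system containers\n", len(strings.Fields(ids)))
}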

TestAddons/parallel/Registry (13.64s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:384: registry stabilized in 3.157293ms
addons_test.go:386: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:353: "registry-6b586f9694-tkq87" [194d10e3-0678-4376-bd91-a96acdc8c845] Running
addons_test.go:386: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.002626124s
addons_test.go:389: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:353: "registry-proxy-4kbxc" [e0bb1f28-5b44-45e5-aefb-aa253b9fffa4] Running
addons_test.go:389: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.00342739s
addons_test.go:394: (dbg) Run:  kubectl --context addons-335994 delete po -l run=registry-test --now
addons_test.go:399: (dbg) Run:  kubectl --context addons-335994 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:399: (dbg) Done: kubectl --context addons-335994 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (3.179151868s)
addons_test.go:413: (dbg) Run:  out/minikube-linux-amd64 -p addons-335994 ip
addons_test.go:1055: (dbg) Run:  out/minikube-linux-amd64 -p addons-335994 addons disable registry --alsologtostderr -v=1
addons_test.go:1055: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-335994 addons disable registry --alsologtostderr -v=1: exit status 11 (247.671201ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1225 18:30:02.276455   21337 out.go:360] Setting OutFile to fd 1 ...
	I1225 18:30:02.276714   21337 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1225 18:30:02.276723   21337 out.go:374] Setting ErrFile to fd 2...
	I1225 18:30:02.276727   21337 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1225 18:30:02.276953   21337 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22301-5579/.minikube/bin
	I1225 18:30:02.277235   21337 mustload.go:66] Loading cluster: addons-335994
	I1225 18:30:02.277524   21337 config.go:182] Loaded profile config "addons-335994": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1225 18:30:02.277536   21337 addons.go:622] checking whether the cluster is paused
	I1225 18:30:02.277615   21337 config.go:182] Loaded profile config "addons-335994": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1225 18:30:02.277630   21337 host.go:66] Checking if "addons-335994" exists ...
	I1225 18:30:02.277998   21337 cli_runner.go:164] Run: docker container inspect addons-335994 --format={{.State.Status}}
	I1225 18:30:02.298527   21337 ssh_runner.go:195] Run: systemctl --version
	I1225 18:30:02.298580   21337 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-335994
	I1225 18:30:02.316505   21337 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22301-5579/.minikube/machines/addons-335994/id_rsa Username:docker}
	I1225 18:30:02.411218   21337 cri.go:61] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1225 18:30:02.411309   21337 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1225 18:30:02.443490   21337 cri.go:96] found id: "ae13d82ab19208f4952cb94c64dec5d732ae1f39f8e0621404c7247137e52a9c"
	I1225 18:30:02.443513   21337 cri.go:96] found id: "a564f66aff1c230f12034368888345290a1ac191db5b257cbd32826875a8ad67"
	I1225 18:30:02.443518   21337 cri.go:96] found id: "fb9aa0d60f0c81e15923123520efc20954a633574450f74ec0ea0a3e90b314c8"
	I1225 18:30:02.443523   21337 cri.go:96] found id: "d2650d63d689ad88a17eeaf98093c607d510fcd6b22a23c2af9efd1f2932e619"
	I1225 18:30:02.443528   21337 cri.go:96] found id: "4128b130074a25d4a8df28170f6846d37ebfd7a2d07a5fc33dab746c82648915"
	I1225 18:30:02.443542   21337 cri.go:96] found id: "33817592eb0db2e6f07a567e2c6a05ce69c1ac649b019e92188ab696db18c932"
	I1225 18:30:02.443546   21337 cri.go:96] found id: "9b67245ec9b381405f30593d867f5e5cbfffaf89edb502cac7c7f5a98858b0ab"
	I1225 18:30:02.443551   21337 cri.go:96] found id: "2919cee4cae672e017d2cc057b52625b032a2c6ef08da6fbf0620796be106460"
	I1225 18:30:02.443555   21337 cri.go:96] found id: "8fbc3d212062a38e6622ed9fbc3f0889258cf6f4e7d4fb14afd72b9fe1b3111f"
	I1225 18:30:02.443563   21337 cri.go:96] found id: "a10d92f993ff9eeafa2fcb2a92dc72f8c14e2c06d1d5bdb76b1599e29961486e"
	I1225 18:30:02.443568   21337 cri.go:96] found id: "0d7c280dc245249ed1f3be62c6c9ae663ce51ccaf65687f39e4e60bd34291ccf"
	I1225 18:30:02.443575   21337 cri.go:96] found id: "e00018aa5b33d1f32fa4e4a0a1d02edaff40699b654dabb056cb2e317b7d6c59"
	I1225 18:30:02.443580   21337 cri.go:96] found id: "e8585c6c0c58b6cd9c9959116cf5a0b20dc858dc19e2352cc6ce199a37e5a7aa"
	I1225 18:30:02.443598   21337 cri.go:96] found id: "b68e3df89706be4b0915e318354d2368e0bf41b39dce1a0641435ee4df7548d2"
	I1225 18:30:02.443605   21337 cri.go:96] found id: "9e74ae6ae78b4dc3b1c93a60da879857955a5f6be8a7782273964ab44b255c66"
	I1225 18:30:02.443619   21337 cri.go:96] found id: "c5c8ab56e74b2b7a6373b1a58b03e6fb619d169b58601995d735e897d9c758ea"
	I1225 18:30:02.443625   21337 cri.go:96] found id: "eacb0925a485dcae72269b51d9663345c1f11632b5013a549a26bf8fb2fb5c80"
	I1225 18:30:02.443630   21337 cri.go:96] found id: "085d0c77def90391d2a114e99f6587e2a0c0a3760dae320144cfaab0961fa907"
	I1225 18:30:02.443635   21337 cri.go:96] found id: "e2dc79b0850b584749fd199f4bbde9ba7b322136a49f4c877b6c309de232e3bc"
	I1225 18:30:02.443640   21337 cri.go:96] found id: "80a662cb164e44deed87cc48e71e68239949d38c2c56a491690a04f800923b20"
	I1225 18:30:02.443645   21337 cri.go:96] found id: "e3cdb3152d28b90fb1def2b45c5dc8a83b7578b628a0c73854286d5ed340874b"
	I1225 18:30:02.443649   21337 cri.go:96] found id: "ccfe4d87852c0e13dcf53f3749926a9e274f59909436cd817078474a0546af7f"
	I1225 18:30:02.443654   21337 cri.go:96] found id: "ae5624121adcc542b0fa7d372b4201440bf41b8429c7c957b8f58572f05dce8b"
	I1225 18:30:02.443664   21337 cri.go:96] found id: "4beb6e0a291214adc57d2a068c0b6283ca02b7da651625ff32c7fa8173b8294a"
	I1225 18:30:02.443668   21337 cri.go:96] found id: ""
	I1225 18:30:02.443715   21337 ssh_runner.go:195] Run: sudo runc list -f json
	I1225 18:30:02.458853   21337 out.go:203] 
	W1225 18:30:02.460285   21337 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-25T18:30:02Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-25T18:30:02Z" level=error msg="open /run/runc: no such file or directory"
	
	W1225 18:30:02.460304   21337 out.go:285] * 
	* 
	W1225 18:30:02.461078   21337 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_94fa7435cdb0fda2540861b9b71556c8cae5c5f1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_94fa7435cdb0fda2540861b9b71556c8cae5c5f1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1225 18:30:02.462257   21337 out.go:203] 

** /stderr **
addons_test.go:1057: failed to disable registry addon: args "out/minikube-linux-amd64 -p addons-335994 addons disable registry --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/Registry (13.64s)

TestAddons/parallel/RegistryCreds (0.41s)

=== RUN   TestAddons/parallel/RegistryCreds
=== PAUSE TestAddons/parallel/RegistryCreds

=== CONT  TestAddons/parallel/RegistryCreds
addons_test.go:325: registry-creds stabilized in 4.090474ms
addons_test.go:327: (dbg) Run:  out/minikube-linux-amd64 addons configure registry-creds -f ./testdata/addons_testconfig.json -p addons-335994
2025/12/25 18:30:02 [DEBUG] GET http://192.168.49.2:5000
addons_test.go:334: (dbg) Run:  kubectl --context addons-335994 -n kube-system get secret -o yaml
addons_test.go:1055: (dbg) Run:  out/minikube-linux-amd64 -p addons-335994 addons disable registry-creds --alsologtostderr -v=1
addons_test.go:1055: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-335994 addons disable registry-creds --alsologtostderr -v=1: exit status 11 (239.022244ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1225 18:30:02.392324   21383 out.go:360] Setting OutFile to fd 1 ...
	I1225 18:30:02.392601   21383 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1225 18:30:02.392609   21383 out.go:374] Setting ErrFile to fd 2...
	I1225 18:30:02.392614   21383 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1225 18:30:02.392779   21383 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22301-5579/.minikube/bin
	I1225 18:30:02.393085   21383 mustload.go:66] Loading cluster: addons-335994
	I1225 18:30:02.393369   21383 config.go:182] Loaded profile config "addons-335994": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1225 18:30:02.393380   21383 addons.go:622] checking whether the cluster is paused
	I1225 18:30:02.393455   21383 config.go:182] Loaded profile config "addons-335994": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1225 18:30:02.393470   21383 host.go:66] Checking if "addons-335994" exists ...
	I1225 18:30:02.393825   21383 cli_runner.go:164] Run: docker container inspect addons-335994 --format={{.State.Status}}
	I1225 18:30:02.413398   21383 ssh_runner.go:195] Run: systemctl --version
	I1225 18:30:02.413453   21383 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-335994
	I1225 18:30:02.431615   21383 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22301-5579/.minikube/machines/addons-335994/id_rsa Username:docker}
	I1225 18:30:02.525636   21383 cri.go:61] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1225 18:30:02.525713   21383 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1225 18:30:02.556050   21383 cri.go:96] found id: "ae13d82ab19208f4952cb94c64dec5d732ae1f39f8e0621404c7247137e52a9c"
	I1225 18:30:02.556097   21383 cri.go:96] found id: "a564f66aff1c230f12034368888345290a1ac191db5b257cbd32826875a8ad67"
	I1225 18:30:02.556103   21383 cri.go:96] found id: "fb9aa0d60f0c81e15923123520efc20954a633574450f74ec0ea0a3e90b314c8"
	I1225 18:30:02.556109   21383 cri.go:96] found id: "d2650d63d689ad88a17eeaf98093c607d510fcd6b22a23c2af9efd1f2932e619"
	I1225 18:30:02.556113   21383 cri.go:96] found id: "4128b130074a25d4a8df28170f6846d37ebfd7a2d07a5fc33dab746c82648915"
	I1225 18:30:02.556119   21383 cri.go:96] found id: "33817592eb0db2e6f07a567e2c6a05ce69c1ac649b019e92188ab696db18c932"
	I1225 18:30:02.556135   21383 cri.go:96] found id: "9b67245ec9b381405f30593d867f5e5cbfffaf89edb502cac7c7f5a98858b0ab"
	I1225 18:30:02.556143   21383 cri.go:96] found id: "2919cee4cae672e017d2cc057b52625b032a2c6ef08da6fbf0620796be106460"
	I1225 18:30:02.556147   21383 cri.go:96] found id: "8fbc3d212062a38e6622ed9fbc3f0889258cf6f4e7d4fb14afd72b9fe1b3111f"
	I1225 18:30:02.556157   21383 cri.go:96] found id: "a10d92f993ff9eeafa2fcb2a92dc72f8c14e2c06d1d5bdb76b1599e29961486e"
	I1225 18:30:02.556165   21383 cri.go:96] found id: "0d7c280dc245249ed1f3be62c6c9ae663ce51ccaf65687f39e4e60bd34291ccf"
	I1225 18:30:02.556169   21383 cri.go:96] found id: "e00018aa5b33d1f32fa4e4a0a1d02edaff40699b654dabb056cb2e317b7d6c59"
	I1225 18:30:02.556177   21383 cri.go:96] found id: "e8585c6c0c58b6cd9c9959116cf5a0b20dc858dc19e2352cc6ce199a37e5a7aa"
	I1225 18:30:02.556180   21383 cri.go:96] found id: "b68e3df89706be4b0915e318354d2368e0bf41b39dce1a0641435ee4df7548d2"
	I1225 18:30:02.556186   21383 cri.go:96] found id: "9e74ae6ae78b4dc3b1c93a60da879857955a5f6be8a7782273964ab44b255c66"
	I1225 18:30:02.556194   21383 cri.go:96] found id: "c5c8ab56e74b2b7a6373b1a58b03e6fb619d169b58601995d735e897d9c758ea"
	I1225 18:30:02.556197   21383 cri.go:96] found id: "eacb0925a485dcae72269b51d9663345c1f11632b5013a549a26bf8fb2fb5c80"
	I1225 18:30:02.556202   21383 cri.go:96] found id: "085d0c77def90391d2a114e99f6587e2a0c0a3760dae320144cfaab0961fa907"
	I1225 18:30:02.556207   21383 cri.go:96] found id: "e2dc79b0850b584749fd199f4bbde9ba7b322136a49f4c877b6c309de232e3bc"
	I1225 18:30:02.556210   21383 cri.go:96] found id: "80a662cb164e44deed87cc48e71e68239949d38c2c56a491690a04f800923b20"
	I1225 18:30:02.556212   21383 cri.go:96] found id: "e3cdb3152d28b90fb1def2b45c5dc8a83b7578b628a0c73854286d5ed340874b"
	I1225 18:30:02.556215   21383 cri.go:96] found id: "ccfe4d87852c0e13dcf53f3749926a9e274f59909436cd817078474a0546af7f"
	I1225 18:30:02.556218   21383 cri.go:96] found id: "ae5624121adcc542b0fa7d372b4201440bf41b8429c7c957b8f58572f05dce8b"
	I1225 18:30:02.556221   21383 cri.go:96] found id: "4beb6e0a291214adc57d2a068c0b6283ca02b7da651625ff32c7fa8173b8294a"
	I1225 18:30:02.556227   21383 cri.go:96] found id: ""
	I1225 18:30:02.556268   21383 ssh_runner.go:195] Run: sudo runc list -f json
	I1225 18:30:02.569796   21383 out.go:203] 
	W1225 18:30:02.571063   21383 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-25T18:30:02Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-25T18:30:02Z" level=error msg="open /run/runc: no such file or directory"
	
	W1225 18:30:02.571083   21383 out.go:285] * 
	* 
	W1225 18:30:02.571824   21383 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_ac42ae7bb4bac5cd909a08f6506d602b3d2ccf6c_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_ac42ae7bb4bac5cd909a08f6506d602b3d2ccf6c_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1225 18:30:02.572993   21383 out.go:203] 

** /stderr **
addons_test.go:1057: failed to disable registry-creds addon: args "out/minikube-linux-amd64 -p addons-335994 addons disable registry-creds --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/RegistryCreds (0.41s)

TestAddons/parallel/Ingress (12.79s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:211: (dbg) Run:  kubectl --context addons-335994 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:236: (dbg) Run:  kubectl --context addons-335994 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:249: (dbg) Run:  kubectl --context addons-335994 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:254: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:353: "nginx" [585d0d4e-408e-4281-9acc-0330cd2df0a5] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:353: "nginx" [585d0d4e-408e-4281-9acc-0330cd2df0a5] Running
addons_test.go:254: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 11.002425272s
I1225 18:30:10.824752    9112 kapi.go:150] Service nginx in namespace default found.
addons_test.go:266: (dbg) Run:  out/minikube-linux-amd64 -p addons-335994 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:290: (dbg) Run:  kubectl --context addons-335994 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:295: (dbg) Run:  out/minikube-linux-amd64 -p addons-335994 ip
addons_test.go:301: (dbg) Run:  nslookup hello-john.test 192.168.49.2
addons_test.go:1055: (dbg) Run:  out/minikube-linux-amd64 -p addons-335994 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:1055: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-335994 addons disable ingress-dns --alsologtostderr -v=1: exit status 11 (244.344433ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1225 18:30:11.730730   22838 out.go:360] Setting OutFile to fd 1 ...
	I1225 18:30:11.730921   22838 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1225 18:30:11.730930   22838 out.go:374] Setting ErrFile to fd 2...
	I1225 18:30:11.730935   22838 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1225 18:30:11.731102   22838 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22301-5579/.minikube/bin
	I1225 18:30:11.731356   22838 mustload.go:66] Loading cluster: addons-335994
	I1225 18:30:11.731656   22838 config.go:182] Loaded profile config "addons-335994": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1225 18:30:11.731667   22838 addons.go:622] checking whether the cluster is paused
	I1225 18:30:11.731743   22838 config.go:182] Loaded profile config "addons-335994": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1225 18:30:11.731759   22838 host.go:66] Checking if "addons-335994" exists ...
	I1225 18:30:11.732168   22838 cli_runner.go:164] Run: docker container inspect addons-335994 --format={{.State.Status}}
	I1225 18:30:11.750336   22838 ssh_runner.go:195] Run: systemctl --version
	I1225 18:30:11.750386   22838 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-335994
	I1225 18:30:11.767679   22838 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22301-5579/.minikube/machines/addons-335994/id_rsa Username:docker}
	I1225 18:30:11.859330   22838 cri.go:61] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1225 18:30:11.859412   22838 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1225 18:30:11.892814   22838 cri.go:96] found id: "17526cde63aa65c4126fa503f16bd14c465678bcb2b913d9c626d26bf26f6a9b"
	I1225 18:30:11.892844   22838 cri.go:96] found id: "ae13d82ab19208f4952cb94c64dec5d732ae1f39f8e0621404c7247137e52a9c"
	I1225 18:30:11.892852   22838 cri.go:96] found id: "a564f66aff1c230f12034368888345290a1ac191db5b257cbd32826875a8ad67"
	I1225 18:30:11.892865   22838 cri.go:96] found id: "fb9aa0d60f0c81e15923123520efc20954a633574450f74ec0ea0a3e90b314c8"
	I1225 18:30:11.892871   22838 cri.go:96] found id: "d2650d63d689ad88a17eeaf98093c607d510fcd6b22a23c2af9efd1f2932e619"
	I1225 18:30:11.892877   22838 cri.go:96] found id: "4128b130074a25d4a8df28170f6846d37ebfd7a2d07a5fc33dab746c82648915"
	I1225 18:30:11.892883   22838 cri.go:96] found id: "33817592eb0db2e6f07a567e2c6a05ce69c1ac649b019e92188ab696db18c932"
	I1225 18:30:11.892888   22838 cri.go:96] found id: "9b67245ec9b381405f30593d867f5e5cbfffaf89edb502cac7c7f5a98858b0ab"
	I1225 18:30:11.892917   22838 cri.go:96] found id: "2919cee4cae672e017d2cc057b52625b032a2c6ef08da6fbf0620796be106460"
	I1225 18:30:11.892940   22838 cri.go:96] found id: "8fbc3d212062a38e6622ed9fbc3f0889258cf6f4e7d4fb14afd72b9fe1b3111f"
	I1225 18:30:11.892949   22838 cri.go:96] found id: "a10d92f993ff9eeafa2fcb2a92dc72f8c14e2c06d1d5bdb76b1599e29961486e"
	I1225 18:30:11.892956   22838 cri.go:96] found id: "0d7c280dc245249ed1f3be62c6c9ae663ce51ccaf65687f39e4e60bd34291ccf"
	I1225 18:30:11.892965   22838 cri.go:96] found id: "e00018aa5b33d1f32fa4e4a0a1d02edaff40699b654dabb056cb2e317b7d6c59"
	I1225 18:30:11.892971   22838 cri.go:96] found id: "e8585c6c0c58b6cd9c9959116cf5a0b20dc858dc19e2352cc6ce199a37e5a7aa"
	I1225 18:30:11.892980   22838 cri.go:96] found id: "b68e3df89706be4b0915e318354d2368e0bf41b39dce1a0641435ee4df7548d2"
	I1225 18:30:11.892987   22838 cri.go:96] found id: "9e74ae6ae78b4dc3b1c93a60da879857955a5f6be8a7782273964ab44b255c66"
	I1225 18:30:11.892992   22838 cri.go:96] found id: "c5c8ab56e74b2b7a6373b1a58b03e6fb619d169b58601995d735e897d9c758ea"
	I1225 18:30:11.893000   22838 cri.go:96] found id: "eacb0925a485dcae72269b51d9663345c1f11632b5013a549a26bf8fb2fb5c80"
	I1225 18:30:11.893009   22838 cri.go:96] found id: "085d0c77def90391d2a114e99f6587e2a0c0a3760dae320144cfaab0961fa907"
	I1225 18:30:11.893015   22838 cri.go:96] found id: "e2dc79b0850b584749fd199f4bbde9ba7b322136a49f4c877b6c309de232e3bc"
	I1225 18:30:11.893028   22838 cri.go:96] found id: "80a662cb164e44deed87cc48e71e68239949d38c2c56a491690a04f800923b20"
	I1225 18:30:11.893037   22838 cri.go:96] found id: "e3cdb3152d28b90fb1def2b45c5dc8a83b7578b628a0c73854286d5ed340874b"
	I1225 18:30:11.893044   22838 cri.go:96] found id: "ccfe4d87852c0e13dcf53f3749926a9e274f59909436cd817078474a0546af7f"
	I1225 18:30:11.893052   22838 cri.go:96] found id: "ae5624121adcc542b0fa7d372b4201440bf41b8429c7c957b8f58572f05dce8b"
	I1225 18:30:11.893058   22838 cri.go:96] found id: "4beb6e0a291214adc57d2a068c0b6283ca02b7da651625ff32c7fa8173b8294a"
	I1225 18:30:11.893066   22838 cri.go:96] found id: ""
	I1225 18:30:11.893146   22838 ssh_runner.go:195] Run: sudo runc list -f json
	I1225 18:30:11.911297   22838 out.go:203] 
	W1225 18:30:11.912943   22838 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-25T18:30:11Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-25T18:30:11Z" level=error msg="open /run/runc: no such file or directory"
	
	W1225 18:30:11.912985   22838 out.go:285] * 
	* 
	W1225 18:30:11.914051   22838 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_4116e8848b7c0e6a40fa9061a5ca6da2e0eb6ead_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_4116e8848b7c0e6a40fa9061a5ca6da2e0eb6ead_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1225 18:30:11.915250   22838 out.go:203] 

** /stderr **
addons_test.go:1057: failed to disable ingress-dns addon: args "out/minikube-linux-amd64 -p addons-335994 addons disable ingress-dns --alsologtostderr -v=1": exit status 11
addons_test.go:1055: (dbg) Run:  out/minikube-linux-amd64 -p addons-335994 addons disable ingress --alsologtostderr -v=1
addons_test.go:1055: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-335994 addons disable ingress --alsologtostderr -v=1: exit status 11 (241.093768ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1225 18:30:11.978847   22920 out.go:360] Setting OutFile to fd 1 ...
	I1225 18:30:11.979169   22920 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1225 18:30:11.979179   22920 out.go:374] Setting ErrFile to fd 2...
	I1225 18:30:11.979184   22920 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1225 18:30:11.979409   22920 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22301-5579/.minikube/bin
	I1225 18:30:11.979739   22920 mustload.go:66] Loading cluster: addons-335994
	I1225 18:30:11.980130   22920 config.go:182] Loaded profile config "addons-335994": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1225 18:30:11.980149   22920 addons.go:622] checking whether the cluster is paused
	I1225 18:30:11.980249   22920 config.go:182] Loaded profile config "addons-335994": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1225 18:30:11.980274   22920 host.go:66] Checking if "addons-335994" exists ...
	I1225 18:30:11.980640   22920 cli_runner.go:164] Run: docker container inspect addons-335994 --format={{.State.Status}}
	I1225 18:30:11.998838   22920 ssh_runner.go:195] Run: systemctl --version
	I1225 18:30:11.998937   22920 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-335994
	I1225 18:30:12.019643   22920 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22301-5579/.minikube/machines/addons-335994/id_rsa Username:docker}
	I1225 18:30:12.109471   22920 cri.go:61] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1225 18:30:12.109551   22920 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1225 18:30:12.139977   22920 cri.go:96] found id: "17526cde63aa65c4126fa503f16bd14c465678bcb2b913d9c626d26bf26f6a9b"
	I1225 18:30:12.140006   22920 cri.go:96] found id: "ae13d82ab19208f4952cb94c64dec5d732ae1f39f8e0621404c7247137e52a9c"
	I1225 18:30:12.140011   22920 cri.go:96] found id: "a564f66aff1c230f12034368888345290a1ac191db5b257cbd32826875a8ad67"
	I1225 18:30:12.140014   22920 cri.go:96] found id: "fb9aa0d60f0c81e15923123520efc20954a633574450f74ec0ea0a3e90b314c8"
	I1225 18:30:12.140017   22920 cri.go:96] found id: "d2650d63d689ad88a17eeaf98093c607d510fcd6b22a23c2af9efd1f2932e619"
	I1225 18:30:12.140021   22920 cri.go:96] found id: "4128b130074a25d4a8df28170f6846d37ebfd7a2d07a5fc33dab746c82648915"
	I1225 18:30:12.140024   22920 cri.go:96] found id: "33817592eb0db2e6f07a567e2c6a05ce69c1ac649b019e92188ab696db18c932"
	I1225 18:30:12.140027   22920 cri.go:96] found id: "9b67245ec9b381405f30593d867f5e5cbfffaf89edb502cac7c7f5a98858b0ab"
	I1225 18:30:12.140030   22920 cri.go:96] found id: "2919cee4cae672e017d2cc057b52625b032a2c6ef08da6fbf0620796be106460"
	I1225 18:30:12.140051   22920 cri.go:96] found id: "8fbc3d212062a38e6622ed9fbc3f0889258cf6f4e7d4fb14afd72b9fe1b3111f"
	I1225 18:30:12.140057   22920 cri.go:96] found id: "a10d92f993ff9eeafa2fcb2a92dc72f8c14e2c06d1d5bdb76b1599e29961486e"
	I1225 18:30:12.140063   22920 cri.go:96] found id: "0d7c280dc245249ed1f3be62c6c9ae663ce51ccaf65687f39e4e60bd34291ccf"
	I1225 18:30:12.140068   22920 cri.go:96] found id: "e00018aa5b33d1f32fa4e4a0a1d02edaff40699b654dabb056cb2e317b7d6c59"
	I1225 18:30:12.140078   22920 cri.go:96] found id: "e8585c6c0c58b6cd9c9959116cf5a0b20dc858dc19e2352cc6ce199a37e5a7aa"
	I1225 18:30:12.140088   22920 cri.go:96] found id: "b68e3df89706be4b0915e318354d2368e0bf41b39dce1a0641435ee4df7548d2"
	I1225 18:30:12.140097   22920 cri.go:96] found id: "9e74ae6ae78b4dc3b1c93a60da879857955a5f6be8a7782273964ab44b255c66"
	I1225 18:30:12.140102   22920 cri.go:96] found id: "c5c8ab56e74b2b7a6373b1a58b03e6fb619d169b58601995d735e897d9c758ea"
	I1225 18:30:12.140115   22920 cri.go:96] found id: "eacb0925a485dcae72269b51d9663345c1f11632b5013a549a26bf8fb2fb5c80"
	I1225 18:30:12.140121   22920 cri.go:96] found id: "085d0c77def90391d2a114e99f6587e2a0c0a3760dae320144cfaab0961fa907"
	I1225 18:30:12.140124   22920 cri.go:96] found id: "e2dc79b0850b584749fd199f4bbde9ba7b322136a49f4c877b6c309de232e3bc"
	I1225 18:30:12.140130   22920 cri.go:96] found id: "80a662cb164e44deed87cc48e71e68239949d38c2c56a491690a04f800923b20"
	I1225 18:30:12.140133   22920 cri.go:96] found id: "e3cdb3152d28b90fb1def2b45c5dc8a83b7578b628a0c73854286d5ed340874b"
	I1225 18:30:12.140136   22920 cri.go:96] found id: "ccfe4d87852c0e13dcf53f3749926a9e274f59909436cd817078474a0546af7f"
	I1225 18:30:12.140142   22920 cri.go:96] found id: "ae5624121adcc542b0fa7d372b4201440bf41b8429c7c957b8f58572f05dce8b"
	I1225 18:30:12.140151   22920 cri.go:96] found id: "4beb6e0a291214adc57d2a068c0b6283ca02b7da651625ff32c7fa8173b8294a"
	I1225 18:30:12.140158   22920 cri.go:96] found id: ""
	I1225 18:30:12.140222   22920 ssh_runner.go:195] Run: sudo runc list -f json
	I1225 18:30:12.154038   22920 out.go:203] 
	W1225 18:30:12.155089   22920 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-25T18:30:12Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-25T18:30:12Z" level=error msg="open /run/runc: no such file or directory"
	
	W1225 18:30:12.155106   22920 out.go:285] * 
	* 
	W1225 18:30:12.155781   22920 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_62553deefc570c97f2052ef703df7b8905a654d6_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_62553deefc570c97f2052ef703df7b8905a654d6_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1225 18:30:12.156989   22920 out.go:203] 

** /stderr **
addons_test.go:1057: failed to disable ingress addon: args "out/minikube-linux-amd64 -p addons-335994 addons disable ingress --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/Ingress (12.79s)
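Note: before the disable step fails, the Ingress test does verify routing: addons_test.go:266 curls http://127.0.0.1/ inside the node with the header "Host: nginx.example.com", and addons_test.go:301 resolves hello-john.test against the node IP 192.168.49.2 for ingress-dns. The snippet below is an equivalent host-header request issued from a machine that can reach the node IP; the IP and hostname are taken from the log above, and the code is an illustrative sketch, not the test's implementation.

// ingress_hostheader_sketch.go: illustrative equivalent of the curl check the
// Ingress test performs (curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'
// via "minikube ssh"), issued here against the node IP from outside the node.
package main

import (
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{Timeout: 10 * time.Second}

	req, err := http.NewRequest(http.MethodGet, "http://192.168.49.2/", nil)
	if err != nil {
		panic(err)
	}
	// ingress-nginx routes by host, so the Host header (not the URL host)
	// selects the nginx backend created from testdata/nginx-ingress-v1.yaml.
	req.Host = "nginx.example.com"

	resp, err := client.Do(req)
	if err != nil {
		fmt.Println("request failed:", err)
		return
	}
	defer resp.Body.Close()

	body, _ := io.ReadAll(resp.Body)
	fmt.Printf("%s, %d bytes\n", resp.Status, len(body))
}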

TestAddons/parallel/InspektorGadget (5.25s)

=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:825: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:353: "gadget-8js4j" [6c7ebe28-2a3e-439a-905c-4b1934ab4e8d] Running
addons_test.go:825: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 5.010730574s
addons_test.go:1055: (dbg) Run:  out/minikube-linux-amd64 -p addons-335994 addons disable inspektor-gadget --alsologtostderr -v=1
addons_test.go:1055: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-335994 addons disable inspektor-gadget --alsologtostderr -v=1: exit status 11 (240.522669ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1225 18:30:01.977156   21170 out.go:360] Setting OutFile to fd 1 ...
	I1225 18:30:01.977308   21170 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1225 18:30:01.977318   21170 out.go:374] Setting ErrFile to fd 2...
	I1225 18:30:01.977322   21170 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1225 18:30:01.977505   21170 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22301-5579/.minikube/bin
	I1225 18:30:01.977762   21170 mustload.go:66] Loading cluster: addons-335994
	I1225 18:30:01.978065   21170 config.go:182] Loaded profile config "addons-335994": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1225 18:30:01.978079   21170 addons.go:622] checking whether the cluster is paused
	I1225 18:30:01.978161   21170 config.go:182] Loaded profile config "addons-335994": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1225 18:30:01.978184   21170 host.go:66] Checking if "addons-335994" exists ...
	I1225 18:30:01.978546   21170 cli_runner.go:164] Run: docker container inspect addons-335994 --format={{.State.Status}}
	I1225 18:30:01.995842   21170 ssh_runner.go:195] Run: systemctl --version
	I1225 18:30:01.995912   21170 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-335994
	I1225 18:30:02.013595   21170 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22301-5579/.minikube/machines/addons-335994/id_rsa Username:docker}
	I1225 18:30:02.106678   21170 cri.go:61] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1225 18:30:02.106777   21170 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1225 18:30:02.138541   21170 cri.go:96] found id: "ae13d82ab19208f4952cb94c64dec5d732ae1f39f8e0621404c7247137e52a9c"
	I1225 18:30:02.138558   21170 cri.go:96] found id: "a564f66aff1c230f12034368888345290a1ac191db5b257cbd32826875a8ad67"
	I1225 18:30:02.138562   21170 cri.go:96] found id: "fb9aa0d60f0c81e15923123520efc20954a633574450f74ec0ea0a3e90b314c8"
	I1225 18:30:02.138566   21170 cri.go:96] found id: "d2650d63d689ad88a17eeaf98093c607d510fcd6b22a23c2af9efd1f2932e619"
	I1225 18:30:02.138569   21170 cri.go:96] found id: "4128b130074a25d4a8df28170f6846d37ebfd7a2d07a5fc33dab746c82648915"
	I1225 18:30:02.138572   21170 cri.go:96] found id: "33817592eb0db2e6f07a567e2c6a05ce69c1ac649b019e92188ab696db18c932"
	I1225 18:30:02.138575   21170 cri.go:96] found id: "9b67245ec9b381405f30593d867f5e5cbfffaf89edb502cac7c7f5a98858b0ab"
	I1225 18:30:02.138578   21170 cri.go:96] found id: "2919cee4cae672e017d2cc057b52625b032a2c6ef08da6fbf0620796be106460"
	I1225 18:30:02.138581   21170 cri.go:96] found id: "8fbc3d212062a38e6622ed9fbc3f0889258cf6f4e7d4fb14afd72b9fe1b3111f"
	I1225 18:30:02.138588   21170 cri.go:96] found id: "a10d92f993ff9eeafa2fcb2a92dc72f8c14e2c06d1d5bdb76b1599e29961486e"
	I1225 18:30:02.138592   21170 cri.go:96] found id: "0d7c280dc245249ed1f3be62c6c9ae663ce51ccaf65687f39e4e60bd34291ccf"
	I1225 18:30:02.138596   21170 cri.go:96] found id: "e00018aa5b33d1f32fa4e4a0a1d02edaff40699b654dabb056cb2e317b7d6c59"
	I1225 18:30:02.138601   21170 cri.go:96] found id: "e8585c6c0c58b6cd9c9959116cf5a0b20dc858dc19e2352cc6ce199a37e5a7aa"
	I1225 18:30:02.138605   21170 cri.go:96] found id: "b68e3df89706be4b0915e318354d2368e0bf41b39dce1a0641435ee4df7548d2"
	I1225 18:30:02.138611   21170 cri.go:96] found id: "9e74ae6ae78b4dc3b1c93a60da879857955a5f6be8a7782273964ab44b255c66"
	I1225 18:30:02.138623   21170 cri.go:96] found id: "c5c8ab56e74b2b7a6373b1a58b03e6fb619d169b58601995d735e897d9c758ea"
	I1225 18:30:02.138627   21170 cri.go:96] found id: "eacb0925a485dcae72269b51d9663345c1f11632b5013a549a26bf8fb2fb5c80"
	I1225 18:30:02.138632   21170 cri.go:96] found id: "085d0c77def90391d2a114e99f6587e2a0c0a3760dae320144cfaab0961fa907"
	I1225 18:30:02.138635   21170 cri.go:96] found id: "e2dc79b0850b584749fd199f4bbde9ba7b322136a49f4c877b6c309de232e3bc"
	I1225 18:30:02.138638   21170 cri.go:96] found id: "80a662cb164e44deed87cc48e71e68239949d38c2c56a491690a04f800923b20"
	I1225 18:30:02.138644   21170 cri.go:96] found id: "e3cdb3152d28b90fb1def2b45c5dc8a83b7578b628a0c73854286d5ed340874b"
	I1225 18:30:02.138652   21170 cri.go:96] found id: "ccfe4d87852c0e13dcf53f3749926a9e274f59909436cd817078474a0546af7f"
	I1225 18:30:02.138656   21170 cri.go:96] found id: "ae5624121adcc542b0fa7d372b4201440bf41b8429c7c957b8f58572f05dce8b"
	I1225 18:30:02.138660   21170 cri.go:96] found id: "4beb6e0a291214adc57d2a068c0b6283ca02b7da651625ff32c7fa8173b8294a"
	I1225 18:30:02.138668   21170 cri.go:96] found id: ""
	I1225 18:30:02.138709   21170 ssh_runner.go:195] Run: sudo runc list -f json
	I1225 18:30:02.156029   21170 out.go:203] 
	W1225 18:30:02.157365   21170 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-25T18:30:02Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-25T18:30:02Z" level=error msg="open /run/runc: no such file or directory"
	
	W1225 18:30:02.157393   21170 out.go:285] * 
	* 
	W1225 18:30:02.158353   21170 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_07218961934993dd21acc63caaf1aa08873c018e_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_07218961934993dd21acc63caaf1aa08873c018e_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1225 18:30:02.159671   21170 out.go:203] 

** /stderr **
addons_test.go:1057: failed to disable inspektor-gadget addon: args "out/minikube-linux-amd64 -p addons-335994 addons disable inspektor-gadget --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/InspektorGadget (5.25s)

TestAddons/parallel/MetricsServer (5.29s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:457: metrics-server stabilized in 3.113868ms
addons_test.go:459: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:353: "metrics-server-85b7d694d7-gbmzm" [297f143a-6dc2-4185-a40f-1367b02ad335] Running
addons_test.go:459: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.003111681s
addons_test.go:465: (dbg) Run:  kubectl --context addons-335994 top pods -n kube-system
addons_test.go:1055: (dbg) Run:  out/minikube-linux-amd64 -p addons-335994 addons disable metrics-server --alsologtostderr -v=1
addons_test.go:1055: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-335994 addons disable metrics-server --alsologtostderr -v=1: exit status 11 (230.256269ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1225 18:29:59.190790   20672 out.go:360] Setting OutFile to fd 1 ...
	I1225 18:29:59.191097   20672 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1225 18:29:59.191109   20672 out.go:374] Setting ErrFile to fd 2...
	I1225 18:29:59.191114   20672 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1225 18:29:59.191310   20672 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22301-5579/.minikube/bin
	I1225 18:29:59.191750   20672 mustload.go:66] Loading cluster: addons-335994
	I1225 18:29:59.193088   20672 config.go:182] Loaded profile config "addons-335994": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1225 18:29:59.193115   20672 addons.go:622] checking whether the cluster is paused
	I1225 18:29:59.193243   20672 config.go:182] Loaded profile config "addons-335994": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1225 18:29:59.193265   20672 host.go:66] Checking if "addons-335994" exists ...
	I1225 18:29:59.193615   20672 cli_runner.go:164] Run: docker container inspect addons-335994 --format={{.State.Status}}
	I1225 18:29:59.211036   20672 ssh_runner.go:195] Run: systemctl --version
	I1225 18:29:59.211113   20672 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-335994
	I1225 18:29:59.230083   20672 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22301-5579/.minikube/machines/addons-335994/id_rsa Username:docker}
	I1225 18:29:59.320008   20672 cri.go:61] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1225 18:29:59.320073   20672 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1225 18:29:59.348405   20672 cri.go:96] found id: "ae13d82ab19208f4952cb94c64dec5d732ae1f39f8e0621404c7247137e52a9c"
	I1225 18:29:59.348437   20672 cri.go:96] found id: "a564f66aff1c230f12034368888345290a1ac191db5b257cbd32826875a8ad67"
	I1225 18:29:59.348441   20672 cri.go:96] found id: "fb9aa0d60f0c81e15923123520efc20954a633574450f74ec0ea0a3e90b314c8"
	I1225 18:29:59.348445   20672 cri.go:96] found id: "d2650d63d689ad88a17eeaf98093c607d510fcd6b22a23c2af9efd1f2932e619"
	I1225 18:29:59.348448   20672 cri.go:96] found id: "4128b130074a25d4a8df28170f6846d37ebfd7a2d07a5fc33dab746c82648915"
	I1225 18:29:59.348451   20672 cri.go:96] found id: "33817592eb0db2e6f07a567e2c6a05ce69c1ac649b019e92188ab696db18c932"
	I1225 18:29:59.348454   20672 cri.go:96] found id: "9b67245ec9b381405f30593d867f5e5cbfffaf89edb502cac7c7f5a98858b0ab"
	I1225 18:29:59.348457   20672 cri.go:96] found id: "2919cee4cae672e017d2cc057b52625b032a2c6ef08da6fbf0620796be106460"
	I1225 18:29:59.348460   20672 cri.go:96] found id: "8fbc3d212062a38e6622ed9fbc3f0889258cf6f4e7d4fb14afd72b9fe1b3111f"
	I1225 18:29:59.348465   20672 cri.go:96] found id: "a10d92f993ff9eeafa2fcb2a92dc72f8c14e2c06d1d5bdb76b1599e29961486e"
	I1225 18:29:59.348468   20672 cri.go:96] found id: "0d7c280dc245249ed1f3be62c6c9ae663ce51ccaf65687f39e4e60bd34291ccf"
	I1225 18:29:59.348470   20672 cri.go:96] found id: "e00018aa5b33d1f32fa4e4a0a1d02edaff40699b654dabb056cb2e317b7d6c59"
	I1225 18:29:59.348473   20672 cri.go:96] found id: "e8585c6c0c58b6cd9c9959116cf5a0b20dc858dc19e2352cc6ce199a37e5a7aa"
	I1225 18:29:59.348476   20672 cri.go:96] found id: "b68e3df89706be4b0915e318354d2368e0bf41b39dce1a0641435ee4df7548d2"
	I1225 18:29:59.348479   20672 cri.go:96] found id: "9e74ae6ae78b4dc3b1c93a60da879857955a5f6be8a7782273964ab44b255c66"
	I1225 18:29:59.348485   20672 cri.go:96] found id: "c5c8ab56e74b2b7a6373b1a58b03e6fb619d169b58601995d735e897d9c758ea"
	I1225 18:29:59.348488   20672 cri.go:96] found id: "eacb0925a485dcae72269b51d9663345c1f11632b5013a549a26bf8fb2fb5c80"
	I1225 18:29:59.348492   20672 cri.go:96] found id: "085d0c77def90391d2a114e99f6587e2a0c0a3760dae320144cfaab0961fa907"
	I1225 18:29:59.348495   20672 cri.go:96] found id: "e2dc79b0850b584749fd199f4bbde9ba7b322136a49f4c877b6c309de232e3bc"
	I1225 18:29:59.348498   20672 cri.go:96] found id: "80a662cb164e44deed87cc48e71e68239949d38c2c56a491690a04f800923b20"
	I1225 18:29:59.348501   20672 cri.go:96] found id: "e3cdb3152d28b90fb1def2b45c5dc8a83b7578b628a0c73854286d5ed340874b"
	I1225 18:29:59.348503   20672 cri.go:96] found id: "ccfe4d87852c0e13dcf53f3749926a9e274f59909436cd817078474a0546af7f"
	I1225 18:29:59.348506   20672 cri.go:96] found id: "ae5624121adcc542b0fa7d372b4201440bf41b8429c7c957b8f58572f05dce8b"
	I1225 18:29:59.348509   20672 cri.go:96] found id: "4beb6e0a291214adc57d2a068c0b6283ca02b7da651625ff32c7fa8173b8294a"
	I1225 18:29:59.348512   20672 cri.go:96] found id: ""
	I1225 18:29:59.348585   20672 ssh_runner.go:195] Run: sudo runc list -f json
	I1225 18:29:59.362049   20672 out.go:203] 
	W1225 18:29:59.363281   20672 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-25T18:29:59Z" level=error msg="open /run/runc: no such file or directory"
	
	W1225 18:29:59.363300   20672 out.go:285] * 
	W1225 18:29:59.364035   20672 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_9e377edc2b59264359e9c26f81b048e390fa608a_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1225 18:29:59.365337   20672 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1057: failed to disable metrics-server addon: args "out/minikube-linux-amd64 -p addons-335994 addons disable metrics-server --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/MetricsServer (5.29s)
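Note: the addon itself was healthy here: the metrics-server pod reached Running within about 5s and the kubectl top pods check passed. The failure is confined to the trailing addons disable step, which trips over the same runc paused-check described under InspektorGadget above. A hedged follow-up check that the component is still installed after the failed disable, assuming the Deployment carries the same k8s-app=metrics-server label the test selects pods by:

	$ kubectl --context addons-335994 -n kube-system get deploy,pods -l k8s-app=metrics-server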

                                                
                                    
TestAddons/parallel/CSI (46.9s)

                                                
                                                
=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CSI
I1225 18:29:51.372332    9112 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
I1225 18:29:51.375574    9112 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
I1225 18:29:51.375594    9112 kapi.go:107] duration metric: took 3.279898ms to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
addons_test.go:551: csi-hostpath-driver pods stabilized in 3.287289ms
addons_test.go:554: (dbg) Run:  kubectl --context addons-335994 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:559: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:403: (dbg) Run:  kubectl --context addons-335994 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-335994 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-335994 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-335994 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-335994 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-335994 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-335994 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-335994 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-335994 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-335994 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-335994 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-335994 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-335994 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-335994 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-335994 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:564: (dbg) Run:  kubectl --context addons-335994 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:569: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:353: "task-pv-pod" [e82c77f3-b8c8-4f97-8055-699ba57873b4] Pending
helpers_test.go:353: "task-pv-pod" [e82c77f3-b8c8-4f97-8055-699ba57873b4] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:353: "task-pv-pod" [e82c77f3-b8c8-4f97-8055-699ba57873b4] Running
addons_test.go:569: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 7.003188131s
addons_test.go:574: (dbg) Run:  kubectl --context addons-335994 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:579: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:428: (dbg) Run:  kubectl --context addons-335994 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:428: (dbg) Run:  kubectl --context addons-335994 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:584: (dbg) Run:  kubectl --context addons-335994 delete pod task-pv-pod
addons_test.go:590: (dbg) Run:  kubectl --context addons-335994 delete pvc hpvc
addons_test.go:596: (dbg) Run:  kubectl --context addons-335994 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:601: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:403: (dbg) Run:  kubectl --context addons-335994 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-335994 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-335994 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-335994 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-335994 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-335994 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-335994 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-335994 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-335994 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-335994 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-335994 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-335994 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-335994 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-335994 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-335994 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-335994 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:606: (dbg) Run:  kubectl --context addons-335994 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:611: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:353: "task-pv-pod-restore" [0ba9d6c3-42ca-4bb3-8f5c-58abaa8348ac] Pending
helpers_test.go:353: "task-pv-pod-restore" [0ba9d6c3-42ca-4bb3-8f5c-58abaa8348ac] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:353: "task-pv-pod-restore" [0ba9d6c3-42ca-4bb3-8f5c-58abaa8348ac] Running
addons_test.go:611: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 7.025949984s
addons_test.go:616: (dbg) Run:  kubectl --context addons-335994 delete pod task-pv-pod-restore
addons_test.go:620: (dbg) Run:  kubectl --context addons-335994 delete pvc hpvc-restore
addons_test.go:624: (dbg) Run:  kubectl --context addons-335994 delete volumesnapshot new-snapshot-demo
addons_test.go:1055: (dbg) Run:  out/minikube-linux-amd64 -p addons-335994 addons disable volumesnapshots --alsologtostderr -v=1
addons_test.go:1055: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-335994 addons disable volumesnapshots --alsologtostderr -v=1: exit status 11 (229.588737ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1225 18:30:37.865868   23935 out.go:360] Setting OutFile to fd 1 ...
	I1225 18:30:37.866146   23935 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1225 18:30:37.866161   23935 out.go:374] Setting ErrFile to fd 2...
	I1225 18:30:37.866165   23935 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1225 18:30:37.866345   23935 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22301-5579/.minikube/bin
	I1225 18:30:37.866601   23935 mustload.go:66] Loading cluster: addons-335994
	I1225 18:30:37.866886   23935 config.go:182] Loaded profile config "addons-335994": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1225 18:30:37.866914   23935 addons.go:622] checking whether the cluster is paused
	I1225 18:30:37.866998   23935 config.go:182] Loaded profile config "addons-335994": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1225 18:30:37.867014   23935 host.go:66] Checking if "addons-335994" exists ...
	I1225 18:30:37.867397   23935 cli_runner.go:164] Run: docker container inspect addons-335994 --format={{.State.Status}}
	I1225 18:30:37.886336   23935 ssh_runner.go:195] Run: systemctl --version
	I1225 18:30:37.886411   23935 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-335994
	I1225 18:30:37.903674   23935 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22301-5579/.minikube/machines/addons-335994/id_rsa Username:docker}
	I1225 18:30:37.993571   23935 cri.go:61] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1225 18:30:37.993632   23935 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1225 18:30:38.022016   23935 cri.go:96] found id: "17526cde63aa65c4126fa503f16bd14c465678bcb2b913d9c626d26bf26f6a9b"
	I1225 18:30:38.022049   23935 cri.go:96] found id: "ae13d82ab19208f4952cb94c64dec5d732ae1f39f8e0621404c7247137e52a9c"
	I1225 18:30:38.022055   23935 cri.go:96] found id: "a564f66aff1c230f12034368888345290a1ac191db5b257cbd32826875a8ad67"
	I1225 18:30:38.022061   23935 cri.go:96] found id: "fb9aa0d60f0c81e15923123520efc20954a633574450f74ec0ea0a3e90b314c8"
	I1225 18:30:38.022066   23935 cri.go:96] found id: "d2650d63d689ad88a17eeaf98093c607d510fcd6b22a23c2af9efd1f2932e619"
	I1225 18:30:38.022070   23935 cri.go:96] found id: "4128b130074a25d4a8df28170f6846d37ebfd7a2d07a5fc33dab746c82648915"
	I1225 18:30:38.022074   23935 cri.go:96] found id: "33817592eb0db2e6f07a567e2c6a05ce69c1ac649b019e92188ab696db18c932"
	I1225 18:30:38.022079   23935 cri.go:96] found id: "9b67245ec9b381405f30593d867f5e5cbfffaf89edb502cac7c7f5a98858b0ab"
	I1225 18:30:38.022083   23935 cri.go:96] found id: "2919cee4cae672e017d2cc057b52625b032a2c6ef08da6fbf0620796be106460"
	I1225 18:30:38.022090   23935 cri.go:96] found id: "8fbc3d212062a38e6622ed9fbc3f0889258cf6f4e7d4fb14afd72b9fe1b3111f"
	I1225 18:30:38.022094   23935 cri.go:96] found id: "a10d92f993ff9eeafa2fcb2a92dc72f8c14e2c06d1d5bdb76b1599e29961486e"
	I1225 18:30:38.022099   23935 cri.go:96] found id: "0d7c280dc245249ed1f3be62c6c9ae663ce51ccaf65687f39e4e60bd34291ccf"
	I1225 18:30:38.022105   23935 cri.go:96] found id: "e00018aa5b33d1f32fa4e4a0a1d02edaff40699b654dabb056cb2e317b7d6c59"
	I1225 18:30:38.022110   23935 cri.go:96] found id: "e8585c6c0c58b6cd9c9959116cf5a0b20dc858dc19e2352cc6ce199a37e5a7aa"
	I1225 18:30:38.022114   23935 cri.go:96] found id: "b68e3df89706be4b0915e318354d2368e0bf41b39dce1a0641435ee4df7548d2"
	I1225 18:30:38.022125   23935 cri.go:96] found id: "9e74ae6ae78b4dc3b1c93a60da879857955a5f6be8a7782273964ab44b255c66"
	I1225 18:30:38.022129   23935 cri.go:96] found id: "c5c8ab56e74b2b7a6373b1a58b03e6fb619d169b58601995d735e897d9c758ea"
	I1225 18:30:38.022137   23935 cri.go:96] found id: "eacb0925a485dcae72269b51d9663345c1f11632b5013a549a26bf8fb2fb5c80"
	I1225 18:30:38.022144   23935 cri.go:96] found id: "085d0c77def90391d2a114e99f6587e2a0c0a3760dae320144cfaab0961fa907"
	I1225 18:30:38.022149   23935 cri.go:96] found id: "e2dc79b0850b584749fd199f4bbde9ba7b322136a49f4c877b6c309de232e3bc"
	I1225 18:30:38.022158   23935 cri.go:96] found id: "80a662cb164e44deed87cc48e71e68239949d38c2c56a491690a04f800923b20"
	I1225 18:30:38.022163   23935 cri.go:96] found id: "e3cdb3152d28b90fb1def2b45c5dc8a83b7578b628a0c73854286d5ed340874b"
	I1225 18:30:38.022170   23935 cri.go:96] found id: "ccfe4d87852c0e13dcf53f3749926a9e274f59909436cd817078474a0546af7f"
	I1225 18:30:38.022176   23935 cri.go:96] found id: "ae5624121adcc542b0fa7d372b4201440bf41b8429c7c957b8f58572f05dce8b"
	I1225 18:30:38.022181   23935 cri.go:96] found id: "4beb6e0a291214adc57d2a068c0b6283ca02b7da651625ff32c7fa8173b8294a"
	I1225 18:30:38.022192   23935 cri.go:96] found id: ""
	I1225 18:30:38.022246   23935 ssh_runner.go:195] Run: sudo runc list -f json
	I1225 18:30:38.036391   23935 out.go:203] 
	W1225 18:30:38.037669   23935 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-25T18:30:38Z" level=error msg="open /run/runc: no such file or directory"
	
	W1225 18:30:38.037695   23935 out.go:285] * 
	W1225 18:30:38.038467   23935 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_f6150db7515caf82d8c4c5baeba9fd21f738a7e0_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1225 18:30:38.039602   23935 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1057: failed to disable volumesnapshots addon: args "out/minikube-linux-amd64 -p addons-335994 addons disable volumesnapshots --alsologtostderr -v=1": exit status 11
addons_test.go:1055: (dbg) Run:  out/minikube-linux-amd64 -p addons-335994 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:1055: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-335994 addons disable csi-hostpath-driver --alsologtostderr -v=1: exit status 11 (225.726937ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1225 18:30:38.095715   24012 out.go:360] Setting OutFile to fd 1 ...
	I1225 18:30:38.095890   24012 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1225 18:30:38.095914   24012 out.go:374] Setting ErrFile to fd 2...
	I1225 18:30:38.095918   24012 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1225 18:30:38.096092   24012 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22301-5579/.minikube/bin
	I1225 18:30:38.096342   24012 mustload.go:66] Loading cluster: addons-335994
	I1225 18:30:38.096652   24012 config.go:182] Loaded profile config "addons-335994": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1225 18:30:38.096668   24012 addons.go:622] checking whether the cluster is paused
	I1225 18:30:38.096749   24012 config.go:182] Loaded profile config "addons-335994": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1225 18:30:38.096767   24012 host.go:66] Checking if "addons-335994" exists ...
	I1225 18:30:38.097164   24012 cli_runner.go:164] Run: docker container inspect addons-335994 --format={{.State.Status}}
	I1225 18:30:38.114643   24012 ssh_runner.go:195] Run: systemctl --version
	I1225 18:30:38.114697   24012 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-335994
	I1225 18:30:38.132827   24012 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22301-5579/.minikube/machines/addons-335994/id_rsa Username:docker}
	I1225 18:30:38.221127   24012 cri.go:61] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1225 18:30:38.221192   24012 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1225 18:30:38.248876   24012 cri.go:96] found id: "17526cde63aa65c4126fa503f16bd14c465678bcb2b913d9c626d26bf26f6a9b"
	I1225 18:30:38.248919   24012 cri.go:96] found id: "ae13d82ab19208f4952cb94c64dec5d732ae1f39f8e0621404c7247137e52a9c"
	I1225 18:30:38.248926   24012 cri.go:96] found id: "a564f66aff1c230f12034368888345290a1ac191db5b257cbd32826875a8ad67"
	I1225 18:30:38.248931   24012 cri.go:96] found id: "fb9aa0d60f0c81e15923123520efc20954a633574450f74ec0ea0a3e90b314c8"
	I1225 18:30:38.248936   24012 cri.go:96] found id: "d2650d63d689ad88a17eeaf98093c607d510fcd6b22a23c2af9efd1f2932e619"
	I1225 18:30:38.248945   24012 cri.go:96] found id: "4128b130074a25d4a8df28170f6846d37ebfd7a2d07a5fc33dab746c82648915"
	I1225 18:30:38.248950   24012 cri.go:96] found id: "33817592eb0db2e6f07a567e2c6a05ce69c1ac649b019e92188ab696db18c932"
	I1225 18:30:38.248954   24012 cri.go:96] found id: "9b67245ec9b381405f30593d867f5e5cbfffaf89edb502cac7c7f5a98858b0ab"
	I1225 18:30:38.248957   24012 cri.go:96] found id: "2919cee4cae672e017d2cc057b52625b032a2c6ef08da6fbf0620796be106460"
	I1225 18:30:38.248962   24012 cri.go:96] found id: "8fbc3d212062a38e6622ed9fbc3f0889258cf6f4e7d4fb14afd72b9fe1b3111f"
	I1225 18:30:38.248965   24012 cri.go:96] found id: "a10d92f993ff9eeafa2fcb2a92dc72f8c14e2c06d1d5bdb76b1599e29961486e"
	I1225 18:30:38.248968   24012 cri.go:96] found id: "0d7c280dc245249ed1f3be62c6c9ae663ce51ccaf65687f39e4e60bd34291ccf"
	I1225 18:30:38.248976   24012 cri.go:96] found id: "e00018aa5b33d1f32fa4e4a0a1d02edaff40699b654dabb056cb2e317b7d6c59"
	I1225 18:30:38.248979   24012 cri.go:96] found id: "e8585c6c0c58b6cd9c9959116cf5a0b20dc858dc19e2352cc6ce199a37e5a7aa"
	I1225 18:30:38.248982   24012 cri.go:96] found id: "b68e3df89706be4b0915e318354d2368e0bf41b39dce1a0641435ee4df7548d2"
	I1225 18:30:38.248986   24012 cri.go:96] found id: "9e74ae6ae78b4dc3b1c93a60da879857955a5f6be8a7782273964ab44b255c66"
	I1225 18:30:38.248989   24012 cri.go:96] found id: "c5c8ab56e74b2b7a6373b1a58b03e6fb619d169b58601995d735e897d9c758ea"
	I1225 18:30:38.248994   24012 cri.go:96] found id: "eacb0925a485dcae72269b51d9663345c1f11632b5013a549a26bf8fb2fb5c80"
	I1225 18:30:38.248997   24012 cri.go:96] found id: "085d0c77def90391d2a114e99f6587e2a0c0a3760dae320144cfaab0961fa907"
	I1225 18:30:38.248999   24012 cri.go:96] found id: "e2dc79b0850b584749fd199f4bbde9ba7b322136a49f4c877b6c309de232e3bc"
	I1225 18:30:38.249009   24012 cri.go:96] found id: "80a662cb164e44deed87cc48e71e68239949d38c2c56a491690a04f800923b20"
	I1225 18:30:38.249015   24012 cri.go:96] found id: "e3cdb3152d28b90fb1def2b45c5dc8a83b7578b628a0c73854286d5ed340874b"
	I1225 18:30:38.249018   24012 cri.go:96] found id: "ccfe4d87852c0e13dcf53f3749926a9e274f59909436cd817078474a0546af7f"
	I1225 18:30:38.249020   24012 cri.go:96] found id: "ae5624121adcc542b0fa7d372b4201440bf41b8429c7c957b8f58572f05dce8b"
	I1225 18:30:38.249023   24012 cri.go:96] found id: "4beb6e0a291214adc57d2a068c0b6283ca02b7da651625ff32c7fa8173b8294a"
	I1225 18:30:38.249027   24012 cri.go:96] found id: ""
	I1225 18:30:38.249067   24012 ssh_runner.go:195] Run: sudo runc list -f json
	I1225 18:30:38.262587   24012 out.go:203] 
	W1225 18:30:38.263821   24012 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-25T18:30:38Z" level=error msg="open /run/runc: no such file or directory"
	
	W1225 18:30:38.263838   24012 out.go:285] * 
	W1225 18:30:38.264556   24012 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_913eef9b964ccef8b5b536327192b81f4aff5da9_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1225 18:30:38.265800   24012 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1057: failed to disable csi-hostpath-driver addon: args "out/minikube-linux-amd64 -p addons-335994 addons disable csi-hostpath-driver --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/CSI (46.90s)
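Note: the CSI scenario itself completed end to end (hpvc bound, task-pv-pod ran, the snapshot was created and restored into hpvc-restore, and task-pv-pod-restore ran), so the 46.9s failure comes only from the two cleanup calls, addons disable volumesnapshots and addons disable csi-hostpath-driver, both rejected by the same runc paused-check. A hedged post-check that the test's own kubectl cleanup finished even though the addon components stay installed; the label is the one used by the kapi wait above:

	$ kubectl --context addons-335994 get pvc,volumesnapshot -n default
	$ kubectl --context addons-335994 -n kube-system get pods -l kubernetes.io/minikube-addons=csi-hostpath-driver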

                                                
                                    
TestAddons/parallel/Headlamp (2.54s)

                                                
                                                
=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:810: (dbg) Run:  out/minikube-linux-amd64 addons enable headlamp -p addons-335994 --alsologtostderr -v=1
addons_test.go:810: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable headlamp -p addons-335994 --alsologtostderr -v=1: exit status 11 (254.997975ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1225 18:29:48.893028   19162 out.go:360] Setting OutFile to fd 1 ...
	I1225 18:29:48.893163   19162 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1225 18:29:48.893171   19162 out.go:374] Setting ErrFile to fd 2...
	I1225 18:29:48.893176   19162 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1225 18:29:48.893367   19162 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22301-5579/.minikube/bin
	I1225 18:29:48.893625   19162 mustload.go:66] Loading cluster: addons-335994
	I1225 18:29:48.893954   19162 config.go:182] Loaded profile config "addons-335994": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1225 18:29:48.893971   19162 addons.go:622] checking whether the cluster is paused
	I1225 18:29:48.894056   19162 config.go:182] Loaded profile config "addons-335994": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1225 18:29:48.894071   19162 host.go:66] Checking if "addons-335994" exists ...
	I1225 18:29:48.894508   19162 cli_runner.go:164] Run: docker container inspect addons-335994 --format={{.State.Status}}
	I1225 18:29:48.914348   19162 ssh_runner.go:195] Run: systemctl --version
	I1225 18:29:48.914419   19162 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-335994
	I1225 18:29:48.933433   19162 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22301-5579/.minikube/machines/addons-335994/id_rsa Username:docker}
	I1225 18:29:49.025870   19162 cri.go:61] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1225 18:29:49.025979   19162 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1225 18:29:49.055934   19162 cri.go:96] found id: "ae13d82ab19208f4952cb94c64dec5d732ae1f39f8e0621404c7247137e52a9c"
	I1225 18:29:49.055955   19162 cri.go:96] found id: "a564f66aff1c230f12034368888345290a1ac191db5b257cbd32826875a8ad67"
	I1225 18:29:49.055959   19162 cri.go:96] found id: "fb9aa0d60f0c81e15923123520efc20954a633574450f74ec0ea0a3e90b314c8"
	I1225 18:29:49.055962   19162 cri.go:96] found id: "d2650d63d689ad88a17eeaf98093c607d510fcd6b22a23c2af9efd1f2932e619"
	I1225 18:29:49.055966   19162 cri.go:96] found id: "4128b130074a25d4a8df28170f6846d37ebfd7a2d07a5fc33dab746c82648915"
	I1225 18:29:49.055969   19162 cri.go:96] found id: "33817592eb0db2e6f07a567e2c6a05ce69c1ac649b019e92188ab696db18c932"
	I1225 18:29:49.055972   19162 cri.go:96] found id: "9b67245ec9b381405f30593d867f5e5cbfffaf89edb502cac7c7f5a98858b0ab"
	I1225 18:29:49.055975   19162 cri.go:96] found id: "2919cee4cae672e017d2cc057b52625b032a2c6ef08da6fbf0620796be106460"
	I1225 18:29:49.055977   19162 cri.go:96] found id: "8fbc3d212062a38e6622ed9fbc3f0889258cf6f4e7d4fb14afd72b9fe1b3111f"
	I1225 18:29:49.055982   19162 cri.go:96] found id: "a10d92f993ff9eeafa2fcb2a92dc72f8c14e2c06d1d5bdb76b1599e29961486e"
	I1225 18:29:49.055985   19162 cri.go:96] found id: "0d7c280dc245249ed1f3be62c6c9ae663ce51ccaf65687f39e4e60bd34291ccf"
	I1225 18:29:49.055987   19162 cri.go:96] found id: "e00018aa5b33d1f32fa4e4a0a1d02edaff40699b654dabb056cb2e317b7d6c59"
	I1225 18:29:49.055990   19162 cri.go:96] found id: "e8585c6c0c58b6cd9c9959116cf5a0b20dc858dc19e2352cc6ce199a37e5a7aa"
	I1225 18:29:49.055993   19162 cri.go:96] found id: "b68e3df89706be4b0915e318354d2368e0bf41b39dce1a0641435ee4df7548d2"
	I1225 18:29:49.055996   19162 cri.go:96] found id: "9e74ae6ae78b4dc3b1c93a60da879857955a5f6be8a7782273964ab44b255c66"
	I1225 18:29:49.056008   19162 cri.go:96] found id: "c5c8ab56e74b2b7a6373b1a58b03e6fb619d169b58601995d735e897d9c758ea"
	I1225 18:29:49.056013   19162 cri.go:96] found id: "eacb0925a485dcae72269b51d9663345c1f11632b5013a549a26bf8fb2fb5c80"
	I1225 18:29:49.056017   19162 cri.go:96] found id: "085d0c77def90391d2a114e99f6587e2a0c0a3760dae320144cfaab0961fa907"
	I1225 18:29:49.056020   19162 cri.go:96] found id: "e2dc79b0850b584749fd199f4bbde9ba7b322136a49f4c877b6c309de232e3bc"
	I1225 18:29:49.056022   19162 cri.go:96] found id: "80a662cb164e44deed87cc48e71e68239949d38c2c56a491690a04f800923b20"
	I1225 18:29:49.056028   19162 cri.go:96] found id: "e3cdb3152d28b90fb1def2b45c5dc8a83b7578b628a0c73854286d5ed340874b"
	I1225 18:29:49.056031   19162 cri.go:96] found id: "ccfe4d87852c0e13dcf53f3749926a9e274f59909436cd817078474a0546af7f"
	I1225 18:29:49.056033   19162 cri.go:96] found id: "ae5624121adcc542b0fa7d372b4201440bf41b8429c7c957b8f58572f05dce8b"
	I1225 18:29:49.056035   19162 cri.go:96] found id: "4beb6e0a291214adc57d2a068c0b6283ca02b7da651625ff32c7fa8173b8294a"
	I1225 18:29:49.056038   19162 cri.go:96] found id: ""
	I1225 18:29:49.056075   19162 ssh_runner.go:195] Run: sudo runc list -f json
	I1225 18:29:49.073380   19162 out.go:203] 
	W1225 18:29:49.074725   19162 out.go:285] X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-25T18:29:49Z" level=error msg="open /run/runc: no such file or directory"
	
	W1225 18:29:49.074751   19162 out.go:285] * 
	W1225 18:29:49.075568   19162 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_af3b8a9ce4f102efc219f1404c9eed7a69cbf2d5_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1225 18:29:49.078928   19162 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:812: failed to enable headlamp addon: args: "out/minikube-linux-amd64 addons enable headlamp -p addons-335994 --alsologtostderr -v=1": exit status 11
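Note: unlike the disable failures above, this test aborts during addons enable headlamp, but at the same pre-flight paused-check, so it fails before any Headlamp manifests are applied. A hedged check that nothing was partially deployed; the headlamp namespace name is an assumption about the addon's manifests, not something shown in this log:

	$ kubectl --context addons-335994 get ns headlamp                  # expected NotFound after the failed enable
	$ kubectl --context addons-335994 get pods -A | grep -i headlamp   # expected to find nothing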
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestAddons/parallel/Headlamp]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestAddons/parallel/Headlamp]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect addons-335994
helpers_test.go:244: (dbg) docker inspect addons-335994:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "85e01fe9e0aff7835333b7d4fd41b708b5d42cb369ba9166ac5d4f8c05b43f4a",
	        "Created": "2025-12-25T18:28:28.97177446Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 11536,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-25T18:28:29.012169333Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:c444b5e11df9d6bc496256b0e11f4e11deb33a885211ded2c3a4667df55bcb6b",
	        "ResolvConfPath": "/var/lib/docker/containers/85e01fe9e0aff7835333b7d4fd41b708b5d42cb369ba9166ac5d4f8c05b43f4a/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/85e01fe9e0aff7835333b7d4fd41b708b5d42cb369ba9166ac5d4f8c05b43f4a/hostname",
	        "HostsPath": "/var/lib/docker/containers/85e01fe9e0aff7835333b7d4fd41b708b5d42cb369ba9166ac5d4f8c05b43f4a/hosts",
	        "LogPath": "/var/lib/docker/containers/85e01fe9e0aff7835333b7d4fd41b708b5d42cb369ba9166ac5d4f8c05b43f4a/85e01fe9e0aff7835333b7d4fd41b708b5d42cb369ba9166ac5d4f8c05b43f4a-json.log",
	        "Name": "/addons-335994",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "addons-335994:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "addons-335994",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "85e01fe9e0aff7835333b7d4fd41b708b5d42cb369ba9166ac5d4f8c05b43f4a",
	                "LowerDir": "/var/lib/docker/overlay2/c67ba7ecd8c1d7fa746c8dec15401a1730cc838f5c3961d702a84fa614347b94-init/diff:/var/lib/docker/overlay2/8152586e7e91edad0090b5c322534edd1346ae6dc28cbca1827aa4c23f366758/diff",
	                "MergedDir": "/var/lib/docker/overlay2/c67ba7ecd8c1d7fa746c8dec15401a1730cc838f5c3961d702a84fa614347b94/merged",
	                "UpperDir": "/var/lib/docker/overlay2/c67ba7ecd8c1d7fa746c8dec15401a1730cc838f5c3961d702a84fa614347b94/diff",
	                "WorkDir": "/var/lib/docker/overlay2/c67ba7ecd8c1d7fa746c8dec15401a1730cc838f5c3961d702a84fa614347b94/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "addons-335994",
	                "Source": "/var/lib/docker/volumes/addons-335994/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-335994",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-335994",
	                "name.minikube.sigs.k8s.io": "addons-335994",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "1a22d72dc395dd207a2e50715808d732e87750f93ffd6387fd0842c6a1e4bc1e",
	            "SandboxKey": "/var/run/docker/netns/1a22d72dc395",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32768"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32769"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32772"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32770"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32771"
	                    }
	                ]
	            },
	            "Networks": {
	                "addons-335994": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "4110ba66ea503682c0ae7569f93c757698589e4d649146c4301bf912822e0e2d",
	                    "EndpointID": "401265143c939c0688c8da0bcd392524a75f2823c36bd4ce41277625b559007c",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "MacAddress": "36:a4:42:bc:a8:79",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "addons-335994",
	                        "85e01fe9e0af"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-335994 -n addons-335994
helpers_test.go:253: <<< TestAddons/parallel/Headlamp FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestAddons/parallel/Headlamp]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-amd64 -p addons-335994 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-amd64 -p addons-335994 logs -n 25: (1.118383533s)
helpers_test.go:261: TestAddons/parallel/Headlamp logs: 
-- stdout --
	
	==> Audit <==
	│ COMMAND │ ARGS │ PROFILE │ USER │ VERSION │ START TIME │ END TIME │
	│ start │ -o=json --download-only -p download-only-658134 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio │ download-only-658134 │ jenkins │ v1.37.0 │ 25 Dec 25 18:27 UTC │ │
	│ delete │ --all │ minikube │ jenkins │ v1.37.0 │ 25 Dec 25 18:27 UTC │ 25 Dec 25 18:27 UTC │
	│ delete │ -p download-only-658134 │ download-only-658134 │ jenkins │ v1.37.0 │ 25 Dec 25 18:27 UTC │ 25 Dec 25 18:27 UTC │
	│ start │ -o=json --download-only -p download-only-964215 --force --alsologtostderr --kubernetes-version=v1.34.3 --container-runtime=crio --driver=docker  --container-runtime=crio │ download-only-964215 │ jenkins │ v1.37.0 │ 25 Dec 25 18:27 UTC │ │
	│ delete │ --all │ minikube │ jenkins │ v1.37.0 │ 25 Dec 25 18:27 UTC │ 25 Dec 25 18:28 UTC │
	│ delete │ -p download-only-964215 │ download-only-964215 │ jenkins │ v1.37.0 │ 25 Dec 25 18:28 UTC │ 25 Dec 25 18:28 UTC │
	│ start │ -o=json --download-only -p download-only-904964 --force --alsologtostderr --kubernetes-version=v1.35.0-rc.1 --container-runtime=crio --driver=docker  --container-runtime=crio │ download-only-904964 │ jenkins │ v1.37.0 │ 25 Dec 25 18:28 UTC │ │
	│ delete │ --all │ minikube │ jenkins │ v1.37.0 │ 25 Dec 25 18:28 UTC │ 25 Dec 25 18:28 UTC │
	│ delete │ -p download-only-904964 │ download-only-904964 │ jenkins │ v1.37.0 │ 25 Dec 25 18:28 UTC │ 25 Dec 25 18:28 UTC │
	│ delete │ -p download-only-658134 │ download-only-658134 │ jenkins │ v1.37.0 │ 25 Dec 25 18:28 UTC │ 25 Dec 25 18:28 UTC │
	│ delete │ -p download-only-964215 │ download-only-964215 │ jenkins │ v1.37.0 │ 25 Dec 25 18:28 UTC │ 25 Dec 25 18:28 UTC │
	│ delete │ -p download-only-904964 │ download-only-904964 │ jenkins │ v1.37.0 │ 25 Dec 25 18:28 UTC │ 25 Dec 25 18:28 UTC │
	│ start │ --download-only -p download-docker-876757 --alsologtostderr --driver=docker  --container-runtime=crio │ download-docker-876757 │ jenkins │ v1.37.0 │ 25 Dec 25 18:28 UTC │ │
	│ delete │ -p download-docker-876757 │ download-docker-876757 │ jenkins │ v1.37.0 │ 25 Dec 25 18:28 UTC │ 25 Dec 25 18:28 UTC │
	│ start │ --download-only -p binary-mirror-321939 --alsologtostderr --binary-mirror http://127.0.0.1:35961 --driver=docker  --container-runtime=crio │ binary-mirror-321939 │ jenkins │ v1.37.0 │ 25 Dec 25 18:28 UTC │ │
	│ delete │ -p binary-mirror-321939 │ binary-mirror-321939 │ jenkins │ v1.37.0 │ 25 Dec 25 18:28 UTC │ 25 Dec 25 18:28 UTC │
	│ addons │ disable dashboard -p addons-335994 │ addons-335994 │ jenkins │ v1.37.0 │ 25 Dec 25 18:28 UTC │ │
	│ addons │ enable dashboard -p addons-335994 │ addons-335994 │ jenkins │ v1.37.0 │ 25 Dec 25 18:28 UTC │ │
	│ start │ -p addons-335994 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher │ addons-335994 │ jenkins │ v1.37.0 │ 25 Dec 25 18:28 UTC │ 25 Dec 25 18:29 UTC │
	│ addons │ addons-335994 addons disable volcano --alsologtostderr -v=1 │ addons-335994 │ jenkins │ v1.37.0 │ 25 Dec 25 18:29 UTC │ │
	│ addons │ addons-335994 addons disable gcp-auth --alsologtostderr -v=1 │ addons-335994 │ jenkins │ v1.37.0 │ 25 Dec 25 18:29 UTC │ │
	│ addons │ enable headlamp -p addons-335994 --alsologtostderr -v=1 │ addons-335994 │ jenkins │ v1.37.0 │ 25 Dec 25 18:29 UTC │ │
	
	
	==> Last Start <==
	Log file created at: 2025/12/25 18:28:05
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1225 18:28:05.142257   10884 out.go:360] Setting OutFile to fd 1 ...
	I1225 18:28:05.142478   10884 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1225 18:28:05.142487   10884 out.go:374] Setting ErrFile to fd 2...
	I1225 18:28:05.142491   10884 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1225 18:28:05.142696   10884 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22301-5579/.minikube/bin
	I1225 18:28:05.143171   10884 out.go:368] Setting JSON to false
	I1225 18:28:05.143885   10884 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":633,"bootTime":1766686652,"procs":175,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1225 18:28:05.143951   10884 start.go:143] virtualization: kvm guest
	I1225 18:28:05.145607   10884 out.go:179] * [addons-335994] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1225 18:28:05.146627   10884 notify.go:221] Checking for updates...
	I1225 18:28:05.146656   10884 out.go:179]   - MINIKUBE_LOCATION=22301
	I1225 18:28:05.147738   10884 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1225 18:28:05.148800   10884 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22301-5579/kubeconfig
	I1225 18:28:05.149881   10884 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22301-5579/.minikube
	I1225 18:28:05.154078   10884 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1225 18:28:05.155325   10884 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1225 18:28:05.156631   10884 driver.go:422] Setting default libvirt URI to qemu:///system
	I1225 18:28:05.179313   10884 docker.go:124] docker version: linux-29.1.3:Docker Engine - Community
	I1225 18:28:05.179401   10884 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1225 18:28:05.233423   10884 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:26 OomKillDisable:false NGoroutines:49 SystemTime:2025-12-25 18:28:05.224198601 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:dea7da592f5d1d2b7755e3a161be07f43fad8f75 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1225 18:28:05.233537   10884 docker.go:319] overlay module found
	I1225 18:28:05.235789   10884 out.go:179] * Using the docker driver based on user configuration
	I1225 18:28:05.236908   10884 start.go:309] selected driver: docker
	I1225 18:28:05.236930   10884 start.go:928] validating driver "docker" against <nil>
	I1225 18:28:05.236944   10884 start.go:939] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1225 18:28:05.237704   10884 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1225 18:28:05.295529   10884 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:26 OomKillDisable:false NGoroutines:49 SystemTime:2025-12-25 18:28:05.286692712 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:dea7da592f5d1d2b7755e3a161be07f43fad8f75 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1225 18:28:05.295668   10884 start_flags.go:333] no existing cluster config was found, will generate one from the flags 
	I1225 18:28:05.295859   10884 start_flags.go:1019] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1225 18:28:05.297536   10884 out.go:179] * Using Docker driver with root privileges
	I1225 18:28:05.298710   10884 cni.go:84] Creating CNI manager for ""
	I1225 18:28:05.298767   10884 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1225 18:28:05.298778   10884 start_flags.go:342] Found "CNI" CNI - setting NetworkPlugin=cni
	I1225 18:28:05.298835   10884 start.go:353] cluster config:
	{Name:addons-335994 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:addons-335994 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1225 18:28:05.300235   10884 out.go:179] * Starting "addons-335994" primary control-plane node in "addons-335994" cluster
	I1225 18:28:05.301352   10884 cache.go:134] Beginning downloading kic base image for docker with crio
	I1225 18:28:05.302456   10884 out.go:179] * Pulling base image v0.0.48-1766570851-22316 ...
	I1225 18:28:05.303463   10884 preload.go:188] Checking if preload exists for k8s version v1.34.3 and runtime crio
	I1225 18:28:05.303492   10884 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22301-5579/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.3-cri-o-overlay-amd64.tar.lz4
	I1225 18:28:05.303500   10884 cache.go:65] Caching tarball of preloaded images
	I1225 18:28:05.303557   10884 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a in local docker daemon
	I1225 18:28:05.303592   10884 preload.go:251] Found /home/jenkins/minikube-integration/22301-5579/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1225 18:28:05.303605   10884 cache.go:68] Finished verifying existence of preloaded tar for v1.34.3 on crio
	I1225 18:28:05.303975   10884 profile.go:143] Saving config to /home/jenkins/minikube-integration/22301-5579/.minikube/profiles/addons-335994/config.json ...
	I1225 18:28:05.304010   10884 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22301-5579/.minikube/profiles/addons-335994/config.json: {Name:mk3ef1921cdea8056c206c9eee6891910f6b5a90 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1225 18:28:05.319579   10884 cache.go:163] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a to local cache
	I1225 18:28:05.319691   10884 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a in local cache directory
	I1225 18:28:05.319709   10884 image.go:68] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a in local cache directory, skipping pull
	I1225 18:28:05.319713   10884 image.go:137] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a exists in cache, skipping pull
	I1225 18:28:05.319720   10884 cache.go:166] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a as a tarball
	I1225 18:28:05.319727   10884 cache.go:176] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a from local cache
	I1225 18:28:18.252748   10884 cache.go:178] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a from cached tarball
	I1225 18:28:18.252784   10884 cache.go:243] Successfully downloaded all kic artifacts
	I1225 18:28:18.252824   10884 start.go:360] acquireMachinesLock for addons-335994: {Name:mka9da3f87fbd2849cf0afe5adfbd8d011069a40 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1225 18:28:18.252935   10884 start.go:364] duration metric: took 91.157µs to acquireMachinesLock for "addons-335994"
	I1225 18:28:18.252965   10884 start.go:93] Provisioning new machine with config: &{Name:addons-335994 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:addons-335994 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false} &{Name: IP: Port:8443 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1225 18:28:18.253038   10884 start.go:125] createHost starting for "" (driver="docker")
	I1225 18:28:18.254503   10884 out.go:252] * Creating docker container (CPUs=2, Memory=4096MB) ...
	I1225 18:28:18.254754   10884 start.go:159] libmachine.API.Create for "addons-335994" (driver="docker")
	I1225 18:28:18.254789   10884 client.go:173] LocalClient.Create starting
	I1225 18:28:18.254874   10884 main.go:144] libmachine: Creating CA: /home/jenkins/minikube-integration/22301-5579/.minikube/certs/ca.pem
	I1225 18:28:18.271777   10884 main.go:144] libmachine: Creating client certificate: /home/jenkins/minikube-integration/22301-5579/.minikube/certs/cert.pem
	I1225 18:28:18.484784   10884 cli_runner.go:164] Run: docker network inspect addons-335994 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1225 18:28:18.502187   10884 cli_runner.go:211] docker network inspect addons-335994 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1225 18:28:18.502251   10884 network_create.go:284] running [docker network inspect addons-335994] to gather additional debugging logs...
	I1225 18:28:18.502267   10884 cli_runner.go:164] Run: docker network inspect addons-335994
	W1225 18:28:18.519456   10884 cli_runner.go:211] docker network inspect addons-335994 returned with exit code 1
	I1225 18:28:18.519485   10884 network_create.go:287] error running [docker network inspect addons-335994]: docker network inspect addons-335994: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-335994 not found
	I1225 18:28:18.519508   10884 network_create.go:289] output of [docker network inspect addons-335994]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-335994 not found
	
	** /stderr **
	I1225 18:28:18.519623   10884 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1225 18:28:18.535859   10884 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001f886d0}
	I1225 18:28:18.535928   10884 network_create.go:124] attempt to create docker network addons-335994 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1225 18:28:18.535993   10884 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-335994 addons-335994
	I1225 18:28:18.581640   10884 network_create.go:108] docker network addons-335994 192.168.49.0/24 created
	I1225 18:28:18.581667   10884 kic.go:121] calculated static IP "192.168.49.2" for the "addons-335994" container
	I1225 18:28:18.581727   10884 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1225 18:28:18.596868   10884 cli_runner.go:164] Run: docker volume create addons-335994 --label name.minikube.sigs.k8s.io=addons-335994 --label created_by.minikube.sigs.k8s.io=true
	I1225 18:28:18.613783   10884 oci.go:103] Successfully created a docker volume addons-335994
	I1225 18:28:18.613846   10884 cli_runner.go:164] Run: docker run --rm --name addons-335994-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-335994 --entrypoint /usr/bin/test -v addons-335994:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a -d /var/lib
	I1225 18:28:25.078507   10884 cli_runner.go:217] Completed: docker run --rm --name addons-335994-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-335994 --entrypoint /usr/bin/test -v addons-335994:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a -d /var/lib: (6.464609719s)
	I1225 18:28:25.078577   10884 oci.go:107] Successfully prepared a docker volume addons-335994
	I1225 18:28:25.078641   10884 preload.go:188] Checking if preload exists for k8s version v1.34.3 and runtime crio
	I1225 18:28:25.078654   10884 kic.go:194] Starting extracting preloaded images to volume ...
	I1225 18:28:25.078704   10884 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22301-5579/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.3-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v addons-335994:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a -I lz4 -xf /preloaded.tar -C /extractDir
	I1225 18:28:28.899977   10884 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22301-5579/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.3-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v addons-335994:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a -I lz4 -xf /preloaded.tar -C /extractDir: (3.82122655s)
	I1225 18:28:28.900007   10884 kic.go:203] duration metric: took 3.821349248s to extract preloaded images to volume ...
	W1225 18:28:28.900101   10884 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1225 18:28:28.900131   10884 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1225 18:28:28.900167   10884 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1225 18:28:28.956217   10884 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-335994 --name addons-335994 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-335994 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-335994 --network addons-335994 --ip 192.168.49.2 --volume addons-335994:/var --security-opt apparmor=unconfined --memory=4096mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a
	I1225 18:28:29.241050   10884 cli_runner.go:164] Run: docker container inspect addons-335994 --format={{.State.Running}}
	I1225 18:28:29.259494   10884 cli_runner.go:164] Run: docker container inspect addons-335994 --format={{.State.Status}}
	I1225 18:28:29.277398   10884 cli_runner.go:164] Run: docker exec addons-335994 stat /var/lib/dpkg/alternatives/iptables
	I1225 18:28:29.325537   10884 oci.go:144] the created container "addons-335994" has a running status.
	I1225 18:28:29.325588   10884 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/22301-5579/.minikube/machines/addons-335994/id_rsa...
	I1225 18:28:29.513391   10884 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/22301-5579/.minikube/machines/addons-335994/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1225 18:28:29.544478   10884 cli_runner.go:164] Run: docker container inspect addons-335994 --format={{.State.Status}}
	I1225 18:28:29.566880   10884 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1225 18:28:29.566958   10884 kic_runner.go:114] Args: [docker exec --privileged addons-335994 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1225 18:28:29.612427   10884 cli_runner.go:164] Run: docker container inspect addons-335994 --format={{.State.Status}}
	I1225 18:28:29.631843   10884 machine.go:94] provisionDockerMachine start ...
	I1225 18:28:29.631944   10884 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-335994
	I1225 18:28:29.651191   10884 main.go:144] libmachine: Using SSH client type: native
	I1225 18:28:29.651415   10884 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e300] 0x850fa0 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I1225 18:28:29.651427   10884 main.go:144] libmachine: About to run SSH command:
	hostname
	I1225 18:28:29.773395   10884 main.go:144] libmachine: SSH cmd err, output: <nil>: addons-335994
	
	I1225 18:28:29.773420   10884 ubuntu.go:182] provisioning hostname "addons-335994"
	I1225 18:28:29.773467   10884 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-335994
	I1225 18:28:29.792350   10884 main.go:144] libmachine: Using SSH client type: native
	I1225 18:28:29.792648   10884 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e300] 0x850fa0 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I1225 18:28:29.792673   10884 main.go:144] libmachine: About to run SSH command:
	sudo hostname addons-335994 && echo "addons-335994" | sudo tee /etc/hostname
	I1225 18:28:29.922736   10884 main.go:144] libmachine: SSH cmd err, output: <nil>: addons-335994
	
	I1225 18:28:29.922805   10884 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-335994
	I1225 18:28:29.939623   10884 main.go:144] libmachine: Using SSH client type: native
	I1225 18:28:29.939854   10884 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e300] 0x850fa0 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I1225 18:28:29.939880   10884 main.go:144] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-335994' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-335994/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-335994' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1225 18:28:30.061335   10884 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	I1225 18:28:30.061362   10884 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22301-5579/.minikube CaCertPath:/home/jenkins/minikube-integration/22301-5579/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22301-5579/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22301-5579/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22301-5579/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22301-5579/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22301-5579/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22301-5579/.minikube}
	I1225 18:28:30.061390   10884 ubuntu.go:190] setting up certificates
	I1225 18:28:30.061399   10884 provision.go:84] configureAuth start
	I1225 18:28:30.061456   10884 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-335994
	I1225 18:28:30.078624   10884 provision.go:143] copyHostCerts
	I1225 18:28:30.078694   10884 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22301-5579/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22301-5579/.minikube/ca.pem (1078 bytes)
	I1225 18:28:30.078801   10884 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22301-5579/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22301-5579/.minikube/cert.pem (1123 bytes)
	I1225 18:28:30.078861   10884 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22301-5579/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22301-5579/.minikube/key.pem (1679 bytes)
	I1225 18:28:30.078971   10884 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22301-5579/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22301-5579/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22301-5579/.minikube/certs/ca-key.pem org=jenkins.addons-335994 san=[127.0.0.1 192.168.49.2 addons-335994 localhost minikube]
	I1225 18:28:30.232291   10884 provision.go:177] copyRemoteCerts
	I1225 18:28:30.232349   10884 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1225 18:28:30.232381   10884 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-335994
	I1225 18:28:30.249349   10884 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22301-5579/.minikube/machines/addons-335994/id_rsa Username:docker}
	I1225 18:28:30.338802   10884 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22301-5579/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1225 18:28:30.356551   10884 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22301-5579/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1225 18:28:30.373549   10884 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22301-5579/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1225 18:28:30.390422   10884 provision.go:87] duration metric: took 329.008791ms to configureAuth
	I1225 18:28:30.390465   10884 ubuntu.go:206] setting minikube options for container-runtime
	I1225 18:28:30.390645   10884 config.go:182] Loaded profile config "addons-335994": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1225 18:28:30.390779   10884 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-335994
	I1225 18:28:30.408059   10884 main.go:144] libmachine: Using SSH client type: native
	I1225 18:28:30.408283   10884 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e300] 0x850fa0 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I1225 18:28:30.408300   10884 main.go:144] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1225 18:28:30.662470   10884 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1225 18:28:30.662494   10884 machine.go:97] duration metric: took 1.030630047s to provisionDockerMachine
	I1225 18:28:30.662505   10884 client.go:176] duration metric: took 12.407709912s to LocalClient.Create
	I1225 18:28:30.662522   10884 start.go:167] duration metric: took 12.407776182s to libmachine.API.Create "addons-335994"
	I1225 18:28:30.662531   10884 start.go:293] postStartSetup for "addons-335994" (driver="docker")
	I1225 18:28:30.662543   10884 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1225 18:28:30.662594   10884 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1225 18:28:30.662629   10884 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-335994
	I1225 18:28:30.680602   10884 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22301-5579/.minikube/machines/addons-335994/id_rsa Username:docker}
	I1225 18:28:30.771163   10884 ssh_runner.go:195] Run: cat /etc/os-release
	I1225 18:28:30.774349   10884 main.go:144] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1225 18:28:30.774378   10884 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1225 18:28:30.774391   10884 filesync.go:126] Scanning /home/jenkins/minikube-integration/22301-5579/.minikube/addons for local assets ...
	I1225 18:28:30.774453   10884 filesync.go:126] Scanning /home/jenkins/minikube-integration/22301-5579/.minikube/files for local assets ...
	I1225 18:28:30.774485   10884 start.go:296] duration metric: took 111.947705ms for postStartSetup
	I1225 18:28:30.774787   10884 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-335994
	I1225 18:28:30.791635   10884 profile.go:143] Saving config to /home/jenkins/minikube-integration/22301-5579/.minikube/profiles/addons-335994/config.json ...
	I1225 18:28:30.791913   10884 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1225 18:28:30.791969   10884 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-335994
	I1225 18:28:30.809623   10884 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22301-5579/.minikube/machines/addons-335994/id_rsa Username:docker}
	I1225 18:28:30.896369   10884 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1225 18:28:30.900664   10884 start.go:128] duration metric: took 12.647609676s to createHost
	I1225 18:28:30.900698   10884 start.go:83] releasing machines lock for "addons-335994", held for 12.647740039s
	I1225 18:28:30.900772   10884 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-335994
	I1225 18:28:30.918083   10884 ssh_runner.go:195] Run: cat /version.json
	I1225 18:28:30.918124   10884 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-335994
	I1225 18:28:30.918169   10884 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1225 18:28:30.918232   10884 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-335994
	I1225 18:28:30.935576   10884 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22301-5579/.minikube/machines/addons-335994/id_rsa Username:docker}
	I1225 18:28:30.935945   10884 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22301-5579/.minikube/machines/addons-335994/id_rsa Username:docker}
	I1225 18:28:31.072688   10884 ssh_runner.go:195] Run: systemctl --version
	I1225 18:28:31.078589   10884 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1225 18:28:31.110478   10884 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1225 18:28:31.114820   10884 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1225 18:28:31.114883   10884 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1225 18:28:31.138771   10884 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1225 18:28:31.138792   10884 start.go:496] detecting cgroup driver to use...
	I1225 18:28:31.138825   10884 detect.go:190] detected "systemd" cgroup driver on host os
	I1225 18:28:31.138866   10884 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1225 18:28:31.153483   10884 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1225 18:28:31.164971   10884 docker.go:218] disabling cri-docker service (if available) ...
	I1225 18:28:31.165016   10884 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1225 18:28:31.179986   10884 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1225 18:28:31.196077   10884 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1225 18:28:31.274963   10884 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1225 18:28:31.359706   10884 docker.go:234] disabling docker service ...
	I1225 18:28:31.359766   10884 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1225 18:28:31.376805   10884 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1225 18:28:31.388516   10884 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1225 18:28:31.469616   10884 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1225 18:28:31.551871   10884 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1225 18:28:31.564181   10884 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1225 18:28:31.577308   10884 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1225 18:28:31.577372   10884 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1225 18:28:31.587194   10884 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1225 18:28:31.587251   10884 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1225 18:28:31.595553   10884 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1225 18:28:31.603421   10884 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1225 18:28:31.611490   10884 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1225 18:28:31.619073   10884 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1225 18:28:31.626808   10884 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1225 18:28:31.639602   10884 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1225 18:28:31.647545   10884 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1225 18:28:31.654371   10884 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 1
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1225 18:28:31.654416   10884 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1225 18:28:31.665441   10884 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1225 18:28:31.672170   10884 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1225 18:28:31.748393   10884 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1225 18:28:31.870020   10884 start.go:553] Will wait 60s for socket path /var/run/crio/crio.sock
	I1225 18:28:31.870091   10884 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1225 18:28:31.873722   10884 start.go:574] Will wait 60s for crictl version
	I1225 18:28:31.873790   10884 ssh_runner.go:195] Run: which crictl
	I1225 18:28:31.877137   10884 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1225 18:28:31.902500   10884 start.go:590] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1225 18:28:31.902627   10884 ssh_runner.go:195] Run: crio --version
	I1225 18:28:31.929089   10884 ssh_runner.go:195] Run: crio --version
	I1225 18:28:31.958033   10884 out.go:179] * Preparing Kubernetes v1.34.3 on CRI-O 1.34.3 ...
	I1225 18:28:31.959119   10884 cli_runner.go:164] Run: docker network inspect addons-335994 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1225 18:28:31.975468   10884 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1225 18:28:31.979216   10884 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1225 18:28:31.988752   10884 kubeadm.go:884] updating cluster {Name:addons-335994 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:addons-335994 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false} ...
	I1225 18:28:31.988867   10884 preload.go:188] Checking if preload exists for k8s version v1.34.3 and runtime crio
	I1225 18:28:31.988965   10884 ssh_runner.go:195] Run: sudo crictl images --output json
	I1225 18:28:32.018369   10884 crio.go:561] all images are preloaded for cri-o runtime.
	I1225 18:28:32.018398   10884 crio.go:433] Images already preloaded, skipping extraction
	I1225 18:28:32.018461   10884 ssh_runner.go:195] Run: sudo crictl images --output json
	I1225 18:28:32.042223   10884 crio.go:561] all images are preloaded for cri-o runtime.
	I1225 18:28:32.042243   10884 cache_images.go:86] Images are preloaded, skipping loading
	I1225 18:28:32.042251   10884 kubeadm.go:935] updating node { 192.168.49.2 8443 v1.34.3 crio true true} ...
	I1225 18:28:32.042323   10884 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=addons-335994 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.3 ClusterName:addons-335994 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
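
The unit override above is later rendered into the 363-byte 10-kubeadm.conf drop-in copied to the node; writing an equivalent drop-in by hand would look roughly like this (a sketch reusing the flags shown above; the empty ExecStart= line is the systemd idiom for clearing the base unit's command before redefining it):

	sudo mkdir -p /etc/systemd/system/kubelet.service.d
	sudo tee /etc/systemd/system/kubelet.service.d/10-kubeadm.conf >/dev/null <<-'EOF'
	[Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=addons-335994 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	EOF
	sudo systemctl daemon-reload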
	I1225 18:28:32.042377   10884 ssh_runner.go:195] Run: crio config
	I1225 18:28:32.083965   10884 cni.go:84] Creating CNI manager for ""
	I1225 18:28:32.083986   10884 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1225 18:28:32.084001   10884 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1225 18:28:32.084023   10884 kubeadm.go:197] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-335994 NodeName:addons-335994 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1225 18:28:32.084138   10884 kubeadm.go:203] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-335994"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1225 18:28:32.084193   10884 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.3
	I1225 18:28:32.091816   10884 binaries.go:51] Found k8s binaries, skipping transfer
	I1225 18:28:32.091867   10884 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1225 18:28:32.099109   10884 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I1225 18:28:32.111025   10884 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1225 18:28:32.125096   10884 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2209 bytes)
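
With the config staged at /var/tmp/minikube/kubeadm.yaml.new, it can be sanity-checked before the real init; a sketch (not part of minikube's own flow) assuming the bundled kubeadm binary supports these subcommands:

	# Validate the staged config without creating any cluster state
	sudo /var/lib/minikube/binaries/v1.34.3/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new
	# Or walk the full init logic without persisting anything
	sudo /var/lib/minikube/binaries/v1.34.3/kubeadm init --config /var/tmp/minikube/kubeadm.yaml.new --dry-run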
	I1225 18:28:32.137112   10884 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1225 18:28:32.140388   10884 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1225 18:28:32.149829   10884 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1225 18:28:32.228011   10884 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1225 18:28:32.251190   10884 certs.go:69] Setting up /home/jenkins/minikube-integration/22301-5579/.minikube/profiles/addons-335994 for IP: 192.168.49.2
	I1225 18:28:32.251225   10884 certs.go:195] generating shared ca certs ...
	I1225 18:28:32.251247   10884 certs.go:227] acquiring lock for ca certs: {Name:mkc96ab6366f062029d385d20297063671b19bca Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1225 18:28:32.251373   10884 certs.go:241] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/22301-5579/.minikube/ca.key
	I1225 18:28:32.365864   10884 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22301-5579/.minikube/ca.crt ...
	I1225 18:28:32.365901   10884 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22301-5579/.minikube/ca.crt: {Name:mkcb9375e0688191ed03a8de242e8c4c6cc02607 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1225 18:28:32.366069   10884 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22301-5579/.minikube/ca.key ...
	I1225 18:28:32.366080   10884 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22301-5579/.minikube/ca.key: {Name:mk14ff302fd657263231b124312d8b63abc2d3b4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1225 18:28:32.366151   10884 certs.go:241] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22301-5579/.minikube/proxy-client-ca.key
	I1225 18:28:32.386327   10884 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22301-5579/.minikube/proxy-client-ca.crt ...
	I1225 18:28:32.386347   10884 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22301-5579/.minikube/proxy-client-ca.crt: {Name:mk37ff431f0df23585adcaa911742ff0bcb09b63 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1225 18:28:32.386457   10884 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22301-5579/.minikube/proxy-client-ca.key ...
	I1225 18:28:32.386467   10884 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22301-5579/.minikube/proxy-client-ca.key: {Name:mkaebdc9aeae08505919611f46d4cb75cfaf8b34 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1225 18:28:32.386534   10884 certs.go:257] generating profile certs ...
	I1225 18:28:32.386587   10884 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22301-5579/.minikube/profiles/addons-335994/client.key
	I1225 18:28:32.386600   10884 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22301-5579/.minikube/profiles/addons-335994/client.crt with IP's: []
	I1225 18:28:32.473104   10884 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22301-5579/.minikube/profiles/addons-335994/client.crt ...
	I1225 18:28:32.473132   10884 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22301-5579/.minikube/profiles/addons-335994/client.crt: {Name:mk115a03b3dfe85f91b34745820d2811d391fff2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1225 18:28:32.473290   10884 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22301-5579/.minikube/profiles/addons-335994/client.key ...
	I1225 18:28:32.473301   10884 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22301-5579/.minikube/profiles/addons-335994/client.key: {Name:mk9794f689dc16e588c216ee89a7c3795e540702 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1225 18:28:32.473372   10884 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22301-5579/.minikube/profiles/addons-335994/apiserver.key.3cbb37aa
	I1225 18:28:32.473390   10884 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22301-5579/.minikube/profiles/addons-335994/apiserver.crt.3cbb37aa with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I1225 18:28:32.573003   10884 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22301-5579/.minikube/profiles/addons-335994/apiserver.crt.3cbb37aa ...
	I1225 18:28:32.573037   10884 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22301-5579/.minikube/profiles/addons-335994/apiserver.crt.3cbb37aa: {Name:mk9a68b30578adb6e4bc3233c01cfba81bbc5cfe Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1225 18:28:32.573218   10884 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22301-5579/.minikube/profiles/addons-335994/apiserver.key.3cbb37aa ...
	I1225 18:28:32.573232   10884 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22301-5579/.minikube/profiles/addons-335994/apiserver.key.3cbb37aa: {Name:mk7a895a1343c44bfcf10d550eb69fd9ee4ce9f2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1225 18:28:32.573316   10884 certs.go:382] copying /home/jenkins/minikube-integration/22301-5579/.minikube/profiles/addons-335994/apiserver.crt.3cbb37aa -> /home/jenkins/minikube-integration/22301-5579/.minikube/profiles/addons-335994/apiserver.crt
	I1225 18:28:32.573402   10884 certs.go:386] copying /home/jenkins/minikube-integration/22301-5579/.minikube/profiles/addons-335994/apiserver.key.3cbb37aa -> /home/jenkins/minikube-integration/22301-5579/.minikube/profiles/addons-335994/apiserver.key
	I1225 18:28:32.573457   10884 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22301-5579/.minikube/profiles/addons-335994/proxy-client.key
	I1225 18:28:32.573477   10884 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22301-5579/.minikube/profiles/addons-335994/proxy-client.crt with IP's: []
	I1225 18:28:32.644186   10884 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22301-5579/.minikube/profiles/addons-335994/proxy-client.crt ...
	I1225 18:28:32.644221   10884 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22301-5579/.minikube/profiles/addons-335994/proxy-client.crt: {Name:mk6305f9a0f626e35798bc1efd2385077fab2889 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1225 18:28:32.644403   10884 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22301-5579/.minikube/profiles/addons-335994/proxy-client.key ...
	I1225 18:28:32.644417   10884 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22301-5579/.minikube/profiles/addons-335994/proxy-client.key: {Name:mk427eb93af8ac65075af136ea7ff7580f2a8ad8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1225 18:28:32.644606   10884 certs.go:484] found cert: /home/jenkins/minikube-integration/22301-5579/.minikube/certs/ca-key.pem (1679 bytes)
	I1225 18:28:32.644645   10884 certs.go:484] found cert: /home/jenkins/minikube-integration/22301-5579/.minikube/certs/ca.pem (1078 bytes)
	I1225 18:28:32.644676   10884 certs.go:484] found cert: /home/jenkins/minikube-integration/22301-5579/.minikube/certs/cert.pem (1123 bytes)
	I1225 18:28:32.644704   10884 certs.go:484] found cert: /home/jenkins/minikube-integration/22301-5579/.minikube/certs/key.pem (1679 bytes)
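
The client.crt/client.key pair above is produced by minikube's own cert helpers against the minikubeCA root; a rough openssl equivalent for a client certificate signed by that CA (a sketch with placeholder filenames; the subject fields are assumed to follow the usual minikube-user/system:masters convention, which this log does not show):

	# Placeholder paths; assumes ca.crt/ca.key are the minikubeCA pair generated above
	openssl genrsa -out client.key 2048
	openssl req -new -key client.key -subj "/O=system:masters/CN=minikube-user" -out client.csr
	openssl x509 -req -in client.csr -CA ca.crt -CAkey ca.key -CAcreateserial -days 365 -out client.crt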
	I1225 18:28:32.645297   10884 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22301-5579/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1225 18:28:32.663101   10884 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22301-5579/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1225 18:28:32.680177   10884 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22301-5579/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1225 18:28:32.696809   10884 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22301-5579/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1225 18:28:32.713843   10884 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22301-5579/.minikube/profiles/addons-335994/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1225 18:28:32.730634   10884 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22301-5579/.minikube/profiles/addons-335994/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1225 18:28:32.747552   10884 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22301-5579/.minikube/profiles/addons-335994/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1225 18:28:32.764030   10884 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22301-5579/.minikube/profiles/addons-335994/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1225 18:28:32.780754   10884 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22301-5579/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1225 18:28:32.800295   10884 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (722 bytes)
	I1225 18:28:32.812597   10884 ssh_runner.go:195] Run: openssl version
	I1225 18:28:32.818370   10884 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1225 18:28:32.825831   10884 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1225 18:28:32.835644   10884 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1225 18:28:32.839014   10884 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 25 18:28 /usr/share/ca-certificates/minikubeCA.pem
	I1225 18:28:32.839056   10884 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1225 18:28:32.872491   10884 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1225 18:28:32.879599   10884 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
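
The ln/openssl sequence above wires the CA into OpenSSL's hashed-directory lookup: verification searches /etc/ssl/certs for a file named <subject-hash>.0, so the hash-named symlink is what makes the certificate trusted. The same wiring for an arbitrary PEM:

	CERT=/usr/share/ca-certificates/minikubeCA.pem
	sudo ln -fs "$CERT" /etc/ssl/certs/minikubeCA.pem
	# OpenSSL resolves issuers by subject hash, so point <hash>.0 at the PEM (b5213941.0 in this run)
	sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/$(openssl x509 -hash -noout -in "$CERT").0"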
	I1225 18:28:32.886336   10884 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1225 18:28:32.889618   10884 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1225 18:28:32.889653   10884 kubeadm.go:401] StartCluster: {Name:addons-335994 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:addons-335994 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1225 18:28:32.889709   10884 cri.go:61] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1225 18:28:32.889748   10884 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1225 18:28:32.915270   10884 cri.go:96] found id: ""
	I1225 18:28:32.915333   10884 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1225 18:28:32.922974   10884 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1225 18:28:32.930562   10884 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1225 18:28:32.930614   10884 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1225 18:28:32.937781   10884 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1225 18:28:32.937794   10884 kubeadm.go:158] found existing configuration files:
	
	I1225 18:28:32.937840   10884 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1225 18:28:32.944824   10884 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1225 18:28:32.944868   10884 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1225 18:28:32.951774   10884 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1225 18:28:32.958860   10884 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1225 18:28:32.958912   10884 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1225 18:28:32.965930   10884 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1225 18:28:32.973500   10884 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1225 18:28:32.973556   10884 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1225 18:28:32.980065   10884 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1225 18:28:32.986963   10884 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1225 18:28:32.987002   10884 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1225 18:28:32.993657   10884 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1225 18:28:33.056037   10884 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1045-gcp\n", err: exit status 1
	I1225 18:28:33.113544   10884 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1225 18:28:43.405881   10884 kubeadm.go:319] [init] Using Kubernetes version: v1.34.3
	I1225 18:28:43.405990   10884 kubeadm.go:319] [preflight] Running pre-flight checks
	I1225 18:28:43.406135   10884 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1225 18:28:43.406202   10884 kubeadm.go:319] KERNEL_VERSION: 6.8.0-1045-gcp
	I1225 18:28:43.406234   10884 kubeadm.go:319] OS: Linux
	I1225 18:28:43.406278   10884 kubeadm.go:319] CGROUPS_CPU: enabled
	I1225 18:28:43.406323   10884 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1225 18:28:43.406378   10884 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1225 18:28:43.406421   10884 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1225 18:28:43.406482   10884 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1225 18:28:43.406552   10884 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1225 18:28:43.406613   10884 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1225 18:28:43.406654   10884 kubeadm.go:319] CGROUPS_IO: enabled
	I1225 18:28:43.406751   10884 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1225 18:28:43.406884   10884 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1225 18:28:43.407035   10884 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1225 18:28:43.407094   10884 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1225 18:28:43.409494   10884 out.go:252]   - Generating certificates and keys ...
	I1225 18:28:43.409559   10884 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1225 18:28:43.409625   10884 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1225 18:28:43.409683   10884 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1225 18:28:43.409737   10884 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1225 18:28:43.409789   10884 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1225 18:28:43.409839   10884 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1225 18:28:43.409886   10884 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1225 18:28:43.410010   10884 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [addons-335994 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1225 18:28:43.410058   10884 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1225 18:28:43.410158   10884 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [addons-335994 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1225 18:28:43.410227   10884 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1225 18:28:43.410297   10884 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1225 18:28:43.410346   10884 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1225 18:28:43.410394   10884 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1225 18:28:43.410438   10884 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1225 18:28:43.410487   10884 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1225 18:28:43.410536   10884 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1225 18:28:43.410595   10884 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1225 18:28:43.410647   10884 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1225 18:28:43.410719   10884 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1225 18:28:43.410784   10884 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1225 18:28:43.412112   10884 out.go:252]   - Booting up control plane ...
	I1225 18:28:43.412190   10884 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1225 18:28:43.412257   10884 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1225 18:28:43.412317   10884 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1225 18:28:43.412408   10884 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1225 18:28:43.412490   10884 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1225 18:28:43.412585   10884 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1225 18:28:43.412656   10884 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1225 18:28:43.412690   10884 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1225 18:28:43.412807   10884 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1225 18:28:43.412945   10884 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1225 18:28:43.413009   10884 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 501.421291ms
	I1225 18:28:43.413106   10884 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1225 18:28:43.413183   10884 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	I1225 18:28:43.413265   10884 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1225 18:28:43.413346   10884 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1225 18:28:43.413424   10884 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 1.549163462s
	I1225 18:28:43.413485   10884 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 2.217863696s
	I1225 18:28:43.413547   10884 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 3.501931154s
	I1225 18:28:43.413636   10884 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1225 18:28:43.413750   10884 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1225 18:28:43.413803   10884 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1225 18:28:43.413989   10884 kubeadm.go:319] [mark-control-plane] Marking the node addons-335994 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1225 18:28:43.414048   10884 kubeadm.go:319] [bootstrap-token] Using token: b4m3d6.3w5mx6jfq5f2yzn3
	I1225 18:28:43.416047   10884 out.go:252]   - Configuring RBAC rules ...
	I1225 18:28:43.416133   10884 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1225 18:28:43.416234   10884 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1225 18:28:43.416386   10884 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1225 18:28:43.416539   10884 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1225 18:28:43.416716   10884 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1225 18:28:43.416855   10884 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1225 18:28:43.417054   10884 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1225 18:28:43.417114   10884 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1225 18:28:43.417158   10884 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1225 18:28:43.417165   10884 kubeadm.go:319] 
	I1225 18:28:43.417219   10884 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1225 18:28:43.417225   10884 kubeadm.go:319] 
	I1225 18:28:43.417287   10884 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1225 18:28:43.417293   10884 kubeadm.go:319] 
	I1225 18:28:43.417312   10884 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1225 18:28:43.417374   10884 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1225 18:28:43.417425   10884 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1225 18:28:43.417431   10884 kubeadm.go:319] 
	I1225 18:28:43.417487   10884 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1225 18:28:43.417494   10884 kubeadm.go:319] 
	I1225 18:28:43.417530   10884 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1225 18:28:43.417545   10884 kubeadm.go:319] 
	I1225 18:28:43.417591   10884 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1225 18:28:43.417657   10884 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1225 18:28:43.417711   10884 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1225 18:28:43.417716   10884 kubeadm.go:319] 
	I1225 18:28:43.417790   10884 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1225 18:28:43.417858   10884 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1225 18:28:43.417864   10884 kubeadm.go:319] 
	I1225 18:28:43.417970   10884 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token b4m3d6.3w5mx6jfq5f2yzn3 \
	I1225 18:28:43.418077   10884 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:0fa81e5b6cf900085d4303938dc22eec97b7b2affd914cb977b5ad4f033ddf10 \
	I1225 18:28:43.418122   10884 kubeadm.go:319] 	--control-plane 
	I1225 18:28:43.418139   10884 kubeadm.go:319] 
	I1225 18:28:43.418266   10884 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1225 18:28:43.418276   10884 kubeadm.go:319] 
	I1225 18:28:43.418381   10884 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token b4m3d6.3w5mx6jfq5f2yzn3 \
	I1225 18:28:43.418487   10884 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:0fa81e5b6cf900085d4303938dc22eec97b7b2affd914cb977b5ad4f033ddf10 
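
The bootstrap token printed above is created with a 24h ttl (per the InitConfiguration earlier), so the join command eventually goes stale; a fresh one can always be minted on the control plane (a sketch, not something minikube runs here):

	# Create a new bootstrap token and print a ready-to-use `kubeadm join` line
	sudo /var/lib/minikube/binaries/v1.34.3/kubeadm token create --print-join-command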
	I1225 18:28:43.418497   10884 cni.go:84] Creating CNI manager for ""
	I1225 18:28:43.418503   10884 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1225 18:28:43.420509   10884 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1225 18:28:43.421503   10884 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1225 18:28:43.425700   10884 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.3/kubectl ...
	I1225 18:28:43.425714   10884 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2620 bytes)
	I1225 18:28:43.438949   10884 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
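
After the kindnet manifest is applied, the CNI pods should come up in kube-system; an easy follow-up check with the same bundled kubectl (a sketch; the kindnet- pod-name prefix is assumed from the upstream kindnetd DaemonSet, not shown in this log):

	# List kube-system pods; the CNI DaemonSet pods (typically kindnet-*) should reach Running
	sudo /var/lib/minikube/binaries/v1.34.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig get pods -n kube-system -o wide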
	I1225 18:28:43.636665   10884 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1225 18:28:43.636800   10884 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1225 18:28:43.636869   10884 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-335994 minikube.k8s.io/updated_at=2025_12_25T18_28_43_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=65b0339f3ab6fa9cf527eb915d9288ef7a9c7fef minikube.k8s.io/name=addons-335994 minikube.k8s.io/primary=true
	I1225 18:28:43.713284   10884 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1225 18:28:43.713299   10884 ops.go:34] apiserver oom_adj: -16
	I1225 18:28:44.214107   10884 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1225 18:28:44.714296   10884 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1225 18:28:45.213324   10884 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1225 18:28:45.714120   10884 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1225 18:28:46.213448   10884 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1225 18:28:46.714255   10884 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1225 18:28:47.214125   10884 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1225 18:28:47.713301   10884 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1225 18:28:48.214216   10884 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1225 18:28:48.277068   10884 kubeadm.go:1114] duration metric: took 4.640307066s to wait for elevateKubeSystemPrivileges
	I1225 18:28:48.277106   10884 kubeadm.go:403] duration metric: took 15.387453742s to StartCluster
	I1225 18:28:48.277141   10884 settings.go:142] acquiring lock: {Name:mk8db67a95daebdad9164c803819dcb179c3006a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1225 18:28:48.277355   10884 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22301-5579/kubeconfig
	I1225 18:28:48.277808   10884 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22301-5579/kubeconfig: {Name:mk959de02482281f87c2171d9b2421941fad1e06 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1225 18:28:48.278058   10884 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1225 18:28:48.278104   10884 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1225 18:28:48.278198   10884 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:true auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:true storage-provisioner:true storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
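
The toEnable map above is the resolved addon plan for this profile; the same switches are exposed through the minikube CLI, for example (a sketch against the addons-335994 profile):

	# Inspect and toggle addons for this profile
	minikube -p addons-335994 addons list
	minikube -p addons-335994 addons enable metrics-server
	minikube -p addons-335994 addons disable inspektor-gadget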
	I1225 18:28:48.278316   10884 config.go:182] Loaded profile config "addons-335994": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1225 18:28:48.278327   10884 addons.go:70] Setting yakd=true in profile "addons-335994"
	I1225 18:28:48.278331   10884 addons.go:70] Setting gcp-auth=true in profile "addons-335994"
	I1225 18:28:48.278351   10884 addons.go:239] Setting addon yakd=true in "addons-335994"
	I1225 18:28:48.278360   10884 addons.go:70] Setting csi-hostpath-driver=true in profile "addons-335994"
	I1225 18:28:48.278361   10884 addons.go:70] Setting amd-gpu-device-plugin=true in profile "addons-335994"
	I1225 18:28:48.278377   10884 addons.go:239] Setting addon amd-gpu-device-plugin=true in "addons-335994"
	I1225 18:28:48.278390   10884 host.go:66] Checking if "addons-335994" exists ...
	I1225 18:28:48.278385   10884 addons.go:70] Setting cloud-spanner=true in profile "addons-335994"
	I1225 18:28:48.278404   10884 host.go:66] Checking if "addons-335994" exists ...
	I1225 18:28:48.278424   10884 addons.go:239] Setting addon csi-hostpath-driver=true in "addons-335994"
	I1225 18:28:48.278428   10884 addons.go:239] Setting addon cloud-spanner=true in "addons-335994"
	I1225 18:28:48.278416   10884 addons.go:70] Setting default-storageclass=true in profile "addons-335994"
	I1225 18:28:48.278453   10884 host.go:66] Checking if "addons-335994" exists ...
	I1225 18:28:48.278477   10884 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "addons-335994"
	I1225 18:28:48.278487   10884 host.go:66] Checking if "addons-335994" exists ...
	I1225 18:28:48.278561   10884 addons.go:70] Setting registry=true in profile "addons-335994"
	I1225 18:28:48.278599   10884 addons.go:239] Setting addon registry=true in "addons-335994"
	I1225 18:28:48.278647   10884 host.go:66] Checking if "addons-335994" exists ...
	I1225 18:28:48.278879   10884 cli_runner.go:164] Run: docker container inspect addons-335994 --format={{.State.Status}}
	I1225 18:28:48.278946   10884 cli_runner.go:164] Run: docker container inspect addons-335994 --format={{.State.Status}}
	I1225 18:28:48.278960   10884 addons.go:70] Setting inspektor-gadget=true in profile "addons-335994"
	I1225 18:28:48.278966   10884 cli_runner.go:164] Run: docker container inspect addons-335994 --format={{.State.Status}}
	I1225 18:28:48.278975   10884 addons.go:239] Setting addon inspektor-gadget=true in "addons-335994"
	I1225 18:28:48.278995   10884 host.go:66] Checking if "addons-335994" exists ...
	I1225 18:28:48.279009   10884 cli_runner.go:164] Run: docker container inspect addons-335994 --format={{.State.Status}}
	I1225 18:28:48.279152   10884 cli_runner.go:164] Run: docker container inspect addons-335994 --format={{.State.Status}}
	I1225 18:28:48.279164   10884 addons.go:70] Setting storage-provisioner-rancher=true in profile "addons-335994"
	I1225 18:28:48.279215   10884 addons_storage_classes.go:34] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-335994"
	I1225 18:28:48.279433   10884 cli_runner.go:164] Run: docker container inspect addons-335994 --format={{.State.Status}}
	I1225 18:28:48.279554   10884 cli_runner.go:164] Run: docker container inspect addons-335994 --format={{.State.Status}}
	I1225 18:28:48.279661   10884 addons.go:70] Setting volumesnapshots=true in profile "addons-335994"
	I1225 18:28:48.279683   10884 addons.go:239] Setting addon volumesnapshots=true in "addons-335994"
	I1225 18:28:48.278946   10884 cli_runner.go:164] Run: docker container inspect addons-335994 --format={{.State.Status}}
	I1225 18:28:48.279885   10884 host.go:66] Checking if "addons-335994" exists ...
	I1225 18:28:48.280223   10884 addons.go:70] Setting nvidia-device-plugin=true in profile "addons-335994"
	I1225 18:28:48.280321   10884 addons.go:239] Setting addon nvidia-device-plugin=true in "addons-335994"
	I1225 18:28:48.280348   10884 host.go:66] Checking if "addons-335994" exists ...
	I1225 18:28:48.280801   10884 cli_runner.go:164] Run: docker container inspect addons-335994 --format={{.State.Status}}
	I1225 18:28:48.281006   10884 cli_runner.go:164] Run: docker container inspect addons-335994 --format={{.State.Status}}
	I1225 18:28:48.281111   10884 out.go:179] * Verifying Kubernetes components...
	I1225 18:28:48.281411   10884 addons.go:70] Setting ingress=true in profile "addons-335994"
	I1225 18:28:48.281432   10884 addons.go:239] Setting addon ingress=true in "addons-335994"
	I1225 18:28:48.281463   10884 host.go:66] Checking if "addons-335994" exists ...
	I1225 18:28:48.282007   10884 addons.go:70] Setting volcano=true in profile "addons-335994"
	I1225 18:28:48.282182   10884 addons.go:239] Setting addon volcano=true in "addons-335994"
	I1225 18:28:48.282355   10884 host.go:66] Checking if "addons-335994" exists ...
	I1225 18:28:48.279152   10884 addons.go:70] Setting metrics-server=true in profile "addons-335994"
	I1225 18:28:48.282610   10884 addons.go:239] Setting addon metrics-server=true in "addons-335994"
	I1225 18:28:48.282635   10884 host.go:66] Checking if "addons-335994" exists ...
	I1225 18:28:48.282684   10884 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1225 18:28:48.282099   10884 addons.go:70] Setting registry-creds=true in profile "addons-335994"
	I1225 18:28:48.283004   10884 addons.go:239] Setting addon registry-creds=true in "addons-335994"
	I1225 18:28:48.283235   10884 host.go:66] Checking if "addons-335994" exists ...
	I1225 18:28:48.282111   10884 addons.go:70] Setting storage-provisioner=true in profile "addons-335994"
	I1225 18:28:48.283500   10884 addons.go:239] Setting addon storage-provisioner=true in "addons-335994"
	I1225 18:28:48.283538   10884 host.go:66] Checking if "addons-335994" exists ...
	I1225 18:28:48.278355   10884 mustload.go:66] Loading cluster: addons-335994
	I1225 18:28:48.282137   10884 addons.go:70] Setting ingress-dns=true in profile "addons-335994"
	I1225 18:28:48.283785   10884 addons.go:239] Setting addon ingress-dns=true in "addons-335994"
	I1225 18:28:48.283814   10884 host.go:66] Checking if "addons-335994" exists ...
	I1225 18:28:48.287484   10884 cli_runner.go:164] Run: docker container inspect addons-335994 --format={{.State.Status}}
	I1225 18:28:48.287524   10884 cli_runner.go:164] Run: docker container inspect addons-335994 --format={{.State.Status}}
	I1225 18:28:48.288460   10884 cli_runner.go:164] Run: docker container inspect addons-335994 --format={{.State.Status}}
	I1225 18:28:48.289663   10884 cli_runner.go:164] Run: docker container inspect addons-335994 --format={{.State.Status}}
	I1225 18:28:48.295223   10884 config.go:182] Loaded profile config "addons-335994": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1225 18:28:48.295556   10884 cli_runner.go:164] Run: docker container inspect addons-335994 --format={{.State.Status}}
	I1225 18:28:48.296697   10884 cli_runner.go:164] Run: docker container inspect addons-335994 --format={{.State.Status}}
	I1225 18:28:48.298683   10884 cli_runner.go:164] Run: docker container inspect addons-335994 --format={{.State.Status}}
	I1225 18:28:48.324488   10884 out.go:179]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.9
	I1225 18:28:48.326982   10884 out.go:179]   - Using image docker.io/registry:3.0.0
	I1225 18:28:48.328968   10884 addons.go:436] installing /etc/kubernetes/addons/registry-rc.yaml
	I1225 18:28:48.329001   10884 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I1225 18:28:48.329062   10884 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-335994
	I1225 18:28:48.349733   10884 out.go:179]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.46
	I1225 18:28:48.350802   10884 out.go:179]   - Using image ghcr.io/manusa/yakd:0.0.6
	I1225 18:28:48.351365   10884 addons.go:436] installing /etc/kubernetes/addons/deployment.yaml
	I1225 18:28:48.351387   10884 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I1225 18:28:48.351458   10884 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-335994
	I1225 18:28:48.352604   10884 addons.go:436] installing /etc/kubernetes/addons/yakd-ns.yaml
	I1225 18:28:48.352623   10884 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I1225 18:28:48.352681   10884 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-335994
	W1225 18:28:48.356267   10884 out.go:285] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I1225 18:28:48.364317   10884 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I1225 18:28:48.367080   10884 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I1225 18:28:48.368197   10884 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I1225 18:28:48.369417   10884 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I1225 18:28:48.370694   10884 out.go:179]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.18.1
	I1225 18:28:48.370732   10884 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I1225 18:28:48.371795   10884 addons.go:436] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1225 18:28:48.371815   10884 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I1225 18:28:48.371890   10884 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-335994
	I1225 18:28:48.372240   10884 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.5
	I1225 18:28:48.374627   10884 out.go:179]   - Using image docker.io/kicbase/minikube-ingress-dns:0.0.4
	I1225 18:28:48.375063   10884 host.go:66] Checking if "addons-335994" exists ...
	I1225 18:28:48.375757   10884 out.go:179]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.47.0
	I1225 18:28:48.376050   10884 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.5
	I1225 18:28:48.376871   10884 out.go:179]   - Using image docker.io/upmcenterprises/registry-creds:1.10
	I1225 18:28:48.376109   10884 addons.go:436] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1225 18:28:48.377019   10884 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2889 bytes)
	I1225 18:28:48.377115   10884 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-335994
	I1225 18:28:48.376931   10884 addons.go:436] installing /etc/kubernetes/addons/ig-deployment.yaml
	I1225 18:28:48.377177   10884 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-deployment.yaml (15034 bytes)
	I1225 18:28:48.377243   10884 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-335994
	I1225 18:28:48.376142   10884 addons.go:239] Setting addon default-storageclass=true in "addons-335994"
	I1225 18:28:48.377348   10884 host.go:66] Checking if "addons-335994" exists ...
	I1225 18:28:48.377797   10884 cli_runner.go:164] Run: docker container inspect addons-335994 --format={{.State.Status}}
	I1225 18:28:48.377856   10884 addons.go:239] Setting addon storage-provisioner-rancher=true in "addons-335994"
	I1225 18:28:48.377889   10884 host.go:66] Checking if "addons-335994" exists ...
	I1225 18:28:48.378091   10884 addons.go:436] installing /etc/kubernetes/addons/registry-creds-rc.yaml
	I1225 18:28:48.378101   10884 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-creds-rc.yaml (3306 bytes)
	I1225 18:28:48.378177   10884 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-335994
	I1225 18:28:48.378786   10884 cli_runner.go:164] Run: docker container inspect addons-335994 --format={{.State.Status}}
	I1225 18:28:48.378844   10884 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I1225 18:28:48.383518   10884 out.go:179]   - Using image registry.k8s.io/ingress-nginx/controller:v1.14.1
	I1225 18:28:48.384032   10884 out.go:179]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I1225 18:28:48.386345   10884 addons.go:436] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I1225 18:28:48.389521   10884 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16257 bytes)
	I1225 18:28:48.389592   10884 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-335994
	I1225 18:28:48.390268   10884 out.go:179]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I1225 18:28:48.393426   10884 addons.go:436] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I1225 18:28:48.393445   10884 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I1225 18:28:48.393516   10884 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-335994
	I1225 18:28:48.396163   10884 out.go:179]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.8.0
	I1225 18:28:48.397314   10884 addons.go:436] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1225 18:28:48.397333   10884 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1225 18:28:48.397412   10884 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-335994
	I1225 18:28:48.412202   10884 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22301-5579/.minikube/machines/addons-335994/id_rsa Username:docker}
	I1225 18:28:48.422464   10884 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22301-5579/.minikube/machines/addons-335994/id_rsa Username:docker}
	I1225 18:28:48.424499   10884 out.go:179]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I1225 18:28:48.427692   10884 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1225 18:28:48.430180   10884 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1225 18:28:48.430207   10884 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1225 18:28:48.430268   10884 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-335994
	I1225 18:28:48.430953   10884 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I1225 18:28:48.430970   10884 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I1225 18:28:48.431024   10884 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-335994
	I1225 18:28:48.432867   10884 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
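
The pipeline above splices a hosts{} stanza (resolving host.minikube.internal to 192.168.49.1) ahead of the forward plugin in the CoreDNS Corefile and adds the log directive; whether the rewrite landed can be confirmed by reading the ConfigMap back (a sketch, assuming kubectl access to the cluster):

	# Print the live Corefile; it should now contain the injected hosts block and the log directive
	kubectl -n kube-system get configmap coredns -o jsonpath='{.data.Corefile}'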
	I1225 18:28:48.446329   10884 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22301-5579/.minikube/machines/addons-335994/id_rsa Username:docker}
	I1225 18:28:48.456174   10884 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22301-5579/.minikube/machines/addons-335994/id_rsa Username:docker}
	I1225 18:28:48.456255   10884 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22301-5579/.minikube/machines/addons-335994/id_rsa Username:docker}
	I1225 18:28:48.459163   10884 out.go:179]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I1225 18:28:48.459250   10884 out.go:179]   - Using image docker.io/rocm/k8s-device-plugin:1.25.2.8
	I1225 18:28:48.460499   10884 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22301-5579/.minikube/machines/addons-335994/id_rsa Username:docker}
	I1225 18:28:48.460945   10884 addons.go:436] installing /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1225 18:28:48.460966   10884 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/amd-gpu-device-plugin.yaml (1868 bytes)
	I1225 18:28:48.461021   10884 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-335994
	I1225 18:28:48.463794   10884 out.go:179]   - Using image docker.io/busybox:stable
	I1225 18:28:48.467423   10884 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1225 18:28:48.467442   10884 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I1225 18:28:48.467502   10884 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-335994
	I1225 18:28:48.473453   10884 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22301-5579/.minikube/machines/addons-335994/id_rsa Username:docker}
	I1225 18:28:48.476678   10884 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1225 18:28:48.476699   10884 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1225 18:28:48.476750   10884 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-335994
	I1225 18:28:48.477798   10884 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22301-5579/.minikube/machines/addons-335994/id_rsa Username:docker}
	I1225 18:28:48.488165   10884 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22301-5579/.minikube/machines/addons-335994/id_rsa Username:docker}
	I1225 18:28:48.491591   10884 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22301-5579/.minikube/machines/addons-335994/id_rsa Username:docker}
	I1225 18:28:48.503220   10884 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1225 18:28:48.516073   10884 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22301-5579/.minikube/machines/addons-335994/id_rsa Username:docker}
	I1225 18:28:48.524819   10884 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22301-5579/.minikube/machines/addons-335994/id_rsa Username:docker}
	I1225 18:28:48.527822   10884 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22301-5579/.minikube/machines/addons-335994/id_rsa Username:docker}
	I1225 18:28:48.531097   10884 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22301-5579/.minikube/machines/addons-335994/id_rsa Username:docker}
	I1225 18:28:48.533837   10884 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22301-5579/.minikube/machines/addons-335994/id_rsa Username:docker}
	W1225 18:28:48.533906   10884 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1225 18:28:48.533944   10884 retry.go:84] will retry after 300ms: ssh: handshake failed: EOF
	I1225 18:28:48.599037   10884 addons.go:436] installing /etc/kubernetes/addons/registry-svc.yaml
	I1225 18:28:48.599086   10884 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I1225 18:28:48.609625   10884 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I1225 18:28:48.617324   10884 addons.go:436] installing /etc/kubernetes/addons/registry-proxy.yaml
	I1225 18:28:48.617345   10884 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I1225 18:28:48.625832   10884 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml
	I1225 18:28:48.627440   10884 addons.go:436] installing /etc/kubernetes/addons/yakd-sa.yaml
	I1225 18:28:48.627462   10884 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I1225 18:28:48.632675   10884 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I1225 18:28:48.638172   10884 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I1225 18:28:48.663436   10884 addons.go:436] installing /etc/kubernetes/addons/yakd-crb.yaml
	I1225 18:28:48.663465   10884 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I1225 18:28:48.663954   10884 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1225 18:28:48.670109   10884 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I1225 18:28:48.670208   10884 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I1225 18:28:48.686366   10884 addons.go:436] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I1225 18:28:48.686393   10884 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I1225 18:28:48.695470   10884 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1225 18:28:48.695633   10884 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1225 18:28:48.706875   10884 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1225 18:28:48.709507   10884 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/ig-deployment.yaml
	I1225 18:28:48.712807   10884 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1225 18:28:48.717561   10884 addons.go:436] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1225 18:28:48.717623   10884 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I1225 18:28:48.726270   10884 addons.go:436] installing /etc/kubernetes/addons/yakd-svc.yaml
	I1225 18:28:48.726304   10884 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I1225 18:28:48.727964   10884 addons.go:436] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I1225 18:28:48.727985   10884 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I1225 18:28:48.741295   10884 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I1225 18:28:48.741348   10884 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I1225 18:28:48.754446   10884 addons.go:436] installing /etc/kubernetes/addons/yakd-dp.yaml
	I1225 18:28:48.754467   10884 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2013 bytes)
	I1225 18:28:48.766500   10884 addons.go:436] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1225 18:28:48.766525   10884 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1225 18:28:48.769619   10884 addons.go:436] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I1225 18:28:48.769699   10884 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I1225 18:28:48.774626   10884 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I1225 18:28:48.774645   10884 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I1225 18:28:48.788259   10884 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I1225 18:28:48.819348   10884 addons.go:436] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1225 18:28:48.819488   10884 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1225 18:28:48.836568   10884 addons.go:436] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I1225 18:28:48.836642   10884 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I1225 18:28:48.855559   10884 addons.go:436] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I1225 18:28:48.855630   10884 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I1225 18:28:48.884455   10884 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1225 18:28:48.903067   10884 addons.go:436] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1225 18:28:48.903167   10884 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I1225 18:28:48.911420   10884 addons.go:436] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I1225 18:28:48.911511   10884 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I1225 18:28:48.942329   10884 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I1225 18:28:48.942351   10884 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I1225 18:28:48.953937   10884 start.go:987] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
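The "host record injected into CoreDNS's ConfigMap" message above is the outcome of the sed pipeline logged at 18:28:48.432867: it rewrites the kube-system/coredns ConfigMap so the Corefile gains a hosts stanza mapping 192.168.49.1 to host.minikube.internal (plus a log directive ahead of errors). A minimal way to confirm the rewrite from outside the node, assuming plain kubectl access to the same cluster, is:

	# Print the live Corefile; the sed expressions in the command above add the stanza
	# shown in this comment ahead of the "forward . /etc/resolv.conf" line:
	#
	#   hosts {
	#      192.168.49.1 host.minikube.internal
	#      fallthrough
	#   }
	#
	kubectl -n kube-system get configmap coredns -o jsonpath='{.data.Corefile}'
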
	I1225 18:28:48.955888   10884 node_ready.go:35] waiting up to 6m0s for node "addons-335994" to be "Ready" ...
	I1225 18:28:48.960613   10884 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1225 18:28:49.013999   10884 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I1225 18:28:49.014033   10884 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I1225 18:28:49.028471   10884 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1225 18:28:49.071223   10884 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I1225 18:28:49.071253   10884 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I1225 18:28:49.180988   10884 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I1225 18:28:49.181009   10884 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I1225 18:28:49.290813   10884 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1225 18:28:49.290921   10884 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I1225 18:28:49.311426   10884 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1225 18:28:49.459177   10884 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-335994" context rescaled to 1 replicas
	I1225 18:28:49.964159   10884 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (1.331439505s)
	I1225 18:28:49.964177   10884 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (1.325957412s)
	I1225 18:28:49.964198   10884 addons.go:495] Verifying addon ingress=true in "addons-335994"
	I1225 18:28:49.964207   10884 addons.go:495] Verifying addon registry=true in "addons-335994"
	I1225 18:28:49.964518   10884 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (1.30052844s)
	I1225 18:28:49.964643   10884 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (1.268987434s)
	I1225 18:28:49.964814   10884 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.269313263s)
	I1225 18:28:49.965155   10884 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (1.258250414s)
	I1225 18:28:49.965247   10884 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/ig-deployment.yaml: (1.255716919s)
	I1225 18:28:49.965278   10884 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.252420022s)
	I1225 18:28:49.965446   10884 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (1.177157552s)
	I1225 18:28:49.966036   10884 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.081548465s)
	I1225 18:28:49.966059   10884 addons.go:495] Verifying addon metrics-server=true in "addons-335994"
	I1225 18:28:49.966712   10884 out.go:179] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-335994 service yakd-dashboard -n yakd-dashboard
	
	I1225 18:28:49.966743   10884 out.go:179] * Verifying registry addon...
	I1225 18:28:49.966719   10884 out.go:179] * Verifying ingress addon...
	I1225 18:28:49.969833   10884 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I1225 18:28:49.969874   10884 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	W1225 18:28:49.976727   10884 out.go:285] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error while marking storage class local-path as non-default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "local-path": the object has been modified; please apply your changes to the latest version and try again]
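The default-storageclass warning above is an optimistic-concurrency conflict: making "standard" the default requires flipping the is-default-class annotation on local-path, and that update raced a concurrent writer, so the apiserver rejected it with "the object has been modified". Re-issuing the change against the latest object converges; a hedged bash sketch (class name taken from this run, the retry loop itself is illustrative and not what minikube does):

	# Retry flipping local-path to non-default until the apiserver accepts it (illustrative only).
	# kubectl patch merges into the current object, so re-running converges even if another
	# controller touched the StorageClass in between.
	until kubectl patch storageclass local-path -p \
	  '{"metadata":{"annotations":{"storageclass.kubernetes.io/is-default-class":"false"}}}'; do
	  sleep 1
	done
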
	I1225 18:28:49.977226   10884 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=registry
	I1225 18:28:49.977246   10884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1225 18:28:49.977589   10884 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I1225 18:28:49.977604   10884 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1225 18:28:50.473987   10884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1225 18:28:50.474275   10884 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1225 18:28:50.494424   10884 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (1.5337702s)
	W1225 18:28:50.494473   10884 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
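This first apply of the volume-snapshot bundle fails on ordering: the VolumeSnapshotClass in csi-hostpath-snapshotclass.yaml is submitted in the same batch as the CRDs that define it, and the snapshot.storage.k8s.io/v1 kinds are not yet served when the class is validated, hence "ensure CRDs are installed first". The retry at 18:28:50.785421 (kubectl apply --force over the same files) succeeds once the CRDs created on the first pass are established, as the completion at 18:28:53.256983 shows. One way to avoid the race entirely, sketched here as an illustration rather than as what minikube does, is to wait for CRD establishment before applying the dependent objects:

	# Apply CRDs first, wait until the apiserver serves them, then apply the dependents (illustrative).
	kubectl apply -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml \
	              -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml \
	              -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	kubectl wait --for=condition=Established --timeout=60s \
	  crd/volumesnapshotclasses.snapshot.storage.k8s.io
	kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml \
	              -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml \
	              -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
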
	I1225 18:28:50.494505   10884 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml: (1.466002524s)
	I1225 18:28:50.494718   10884 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (1.183193235s)
	I1225 18:28:50.494741   10884 addons.go:495] Verifying addon csi-hostpath-driver=true in "addons-335994"
	I1225 18:28:50.496492   10884 out.go:179] * Verifying csi-hostpath-driver addon...
	I1225 18:28:50.498702   10884 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I1225 18:28:50.505836   10884 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1225 18:28:50.505858   10884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1225 18:28:50.785421   10884 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	W1225 18:28:50.959417   10884 node_ready.go:57] node "addons-335994" has "Ready":"False" status (will retry)
	I1225 18:28:50.973251   10884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1225 18:28:50.973428   10884 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1225 18:28:51.073391   10884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1225 18:28:51.473618   10884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1225 18:28:51.473671   10884 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1225 18:28:51.575086   10884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1225 18:28:51.972881   10884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1225 18:28:51.973129   10884 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1225 18:28:52.074017   10884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1225 18:28:52.473330   10884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1225 18:28:52.473404   10884 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1225 18:28:52.501608   10884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1225 18:28:52.973108   10884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1225 18:28:52.973291   10884 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1225 18:28:53.080316   10884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1225 18:28:53.256983   10884 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.47151196s)
	W1225 18:28:53.458842   10884 node_ready.go:57] node "addons-335994" has "Ready":"False" status (will retry)
	I1225 18:28:53.473451   10884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1225 18:28:53.473552   10884 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1225 18:28:53.574363   10884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1225 18:28:53.973139   10884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1225 18:28:53.973195   10884 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1225 18:28:54.001938   10884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1225 18:28:54.473097   10884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1225 18:28:54.473262   10884 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1225 18:28:54.501666   10884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1225 18:28:54.972865   10884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1225 18:28:54.973044   10884 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1225 18:28:55.074252   10884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1225 18:28:55.473765   10884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1225 18:28:55.473811   10884 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1225 18:28:55.502208   10884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1225 18:28:55.959039   10884 node_ready.go:57] node "addons-335994" has "Ready":"False" status (will retry)
	I1225 18:28:55.972698   10884 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1225 18:28:55.973118   10884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1225 18:28:55.983004   10884 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I1225 18:28:55.983074   10884 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-335994
	I1225 18:28:56.001262   10884 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22301-5579/.minikube/machines/addons-335994/id_rsa Username:docker}
	I1225 18:28:56.073049   10884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1225 18:28:56.105309   10884 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I1225 18:28:56.119544   10884 addons.go:239] Setting addon gcp-auth=true in "addons-335994"
	I1225 18:28:56.119596   10884 host.go:66] Checking if "addons-335994" exists ...
	I1225 18:28:56.120000   10884 cli_runner.go:164] Run: docker container inspect addons-335994 --format={{.State.Status}}
	I1225 18:28:56.137156   10884 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I1225 18:28:56.137230   10884 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-335994
	I1225 18:28:56.154426   10884 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22301-5579/.minikube/machines/addons-335994/id_rsa Username:docker}
	I1225 18:28:56.241472   10884 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.5
	I1225 18:28:56.242853   10884 out.go:179]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.3
	I1225 18:28:56.243965   10884 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I1225 18:28:56.243983   10884 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I1225 18:28:56.257245   10884 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I1225 18:28:56.257265   10884 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I1225 18:28:56.270286   10884 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1225 18:28:56.270309   10884 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I1225 18:28:56.282988   10884 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1225 18:28:56.473185   10884 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1225 18:28:56.473314   10884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1225 18:28:56.501977   10884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1225 18:28:56.583827   10884 addons.go:495] Verifying addon gcp-auth=true in "addons-335994"
	I1225 18:28:56.585133   10884 out.go:179] * Verifying gcp-auth addon...
	I1225 18:28:56.587509   10884 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I1225 18:28:56.589547   10884 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I1225 18:28:56.589560   10884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1225 18:28:56.972345   10884 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1225 18:28:56.973036   10884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1225 18:28:57.001181   10884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1225 18:28:57.090371   10884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1225 18:28:57.472985   10884 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1225 18:28:57.473469   10884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1225 18:28:57.502096   10884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1225 18:28:57.590983   10884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1225 18:28:57.972809   10884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1225 18:28:57.972876   10884 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1225 18:28:58.002070   10884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1225 18:28:58.090395   10884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1225 18:28:58.459313   10884 node_ready.go:57] node "addons-335994" has "Ready":"False" status (will retry)
	I1225 18:28:58.472809   10884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1225 18:28:58.473002   10884 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1225 18:28:58.501646   10884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1225 18:28:58.590279   10884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1225 18:28:58.972714   10884 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1225 18:28:58.973368   10884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1225 18:28:59.002060   10884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1225 18:28:59.092469   10884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1225 18:28:59.472739   10884 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1225 18:28:59.473480   10884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1225 18:28:59.502042   10884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1225 18:28:59.590557   10884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1225 18:28:59.973644   10884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1225 18:28:59.973714   10884 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1225 18:29:00.002161   10884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1225 18:29:00.090764   10884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1225 18:29:00.472773   10884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1225 18:29:00.472953   10884 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1225 18:29:00.501936   10884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1225 18:29:00.590142   10884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1225 18:29:00.958795   10884 node_ready.go:57] node "addons-335994" has "Ready":"False" status (will retry)
	I1225 18:29:00.973164   10884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1225 18:29:00.973345   10884 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1225 18:29:01.001991   10884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1225 18:29:01.090567   10884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1225 18:29:01.459548   10884 node_ready.go:49] node "addons-335994" is "Ready"
	I1225 18:29:01.459576   10884 node_ready.go:38] duration metric: took 12.503647988s for node "addons-335994" to be "Ready" ...
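The node readiness wait above (6m budget, 12.5s actual) is driven by minikube polling the node status; the equivalent check done by hand, assuming kubectl access to this cluster, would be:

	# Block until the node reports the Ready condition, matching the 6-minute budget in the log (illustrative).
	kubectl wait --for=condition=Ready node/addons-335994 --timeout=6m
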
	I1225 18:29:01.459589   10884 api_server.go:52] waiting for apiserver process to appear ...
	I1225 18:29:01.459637   10884 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1225 18:29:01.473487   10884 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1225 18:29:01.473727   10884 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I1225 18:29:01.473742   10884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1225 18:29:01.476135   10884 api_server.go:72] duration metric: took 13.197995432s to wait for apiserver process to appear ...
	I1225 18:29:01.476183   10884 api_server.go:88] waiting for apiserver healthz status ...
	I1225 18:29:01.476205   10884 api_server.go:299] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1225 18:29:01.481117   10884 api_server.go:325] https://192.168.49.2:8443/healthz returned 200:
	ok
	I1225 18:29:01.482015   10884 api_server.go:141] control plane version: v1.34.3
	I1225 18:29:01.482045   10884 api_server.go:131] duration metric: took 5.850963ms to wait for apiserver health ...
	I1225 18:29:01.482056   10884 system_pods.go:43] waiting for kube-system pods to appear ...
	I1225 18:29:01.485832   10884 system_pods.go:59] 20 kube-system pods found
	I1225 18:29:01.485875   10884 system_pods.go:61] "amd-gpu-device-plugin-n5wqv" [ebce9916-ffda-466a-99d9-dc0c42aa7b3c] Pending
	I1225 18:29:01.485936   10884 system_pods.go:61] "coredns-66bc5c9577-vq4f4" [842d7785-6e17-46f4-8932-8d200ed18c4b] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1225 18:29:01.485947   10884 system_pods.go:61] "csi-hostpath-attacher-0" [a204f537-befa-47ca-9637-0e47dbafda8d] Pending
	I1225 18:29:01.485956   10884 system_pods.go:61] "csi-hostpath-resizer-0" [3da3d42e-79eb-45e2-9464-fcd45c7843b3] Pending
	I1225 18:29:01.485961   10884 system_pods.go:61] "csi-hostpathplugin-zgpkw" [f116da97-cea1-48be-9e7b-cebaefd3bdc1] Pending
	I1225 18:29:01.485969   10884 system_pods.go:61] "etcd-addons-335994" [bb03e54d-6ac8-4009-9bb9-f3e3d7decdfc] Running
	I1225 18:29:01.485975   10884 system_pods.go:61] "kindnet-pfdzw" [dfa35684-c6cc-43ef-aae2-c64d94f32753] Running
	I1225 18:29:01.485979   10884 system_pods.go:61] "kube-apiserver-addons-335994" [4acbe065-d19e-4fbc-9ee7-2b0ad97427e5] Running
	I1225 18:29:01.485982   10884 system_pods.go:61] "kube-controller-manager-addons-335994" [f8670a07-2f2c-4053-b972-76c93f626f60] Running
	I1225 18:29:01.485993   10884 system_pods.go:61] "kube-ingress-dns-minikube" [5fb52292-2880-4819-b4cb-82d92aec2725] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1225 18:29:01.485999   10884 system_pods.go:61] "kube-proxy-znfvz" [750b4578-898a-4a84-91cf-7dce2eaba8ed] Running
	I1225 18:29:01.486011   10884 system_pods.go:61] "kube-scheduler-addons-335994" [9e3362a1-239c-4213-9379-e5bd96312a14] Running
	I1225 18:29:01.486019   10884 system_pods.go:61] "metrics-server-85b7d694d7-gbmzm" [297f143a-6dc2-4185-a40f-1367b02ad335] Pending
	I1225 18:29:01.486028   10884 system_pods.go:61] "nvidia-device-plugin-daemonset-gdrj7" [666e25f3-c012-4fd4-945a-39e959e52731] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1225 18:29:01.486039   10884 system_pods.go:61] "registry-6b586f9694-tkq87" [194d10e3-0678-4376-bd91-a96acdc8c845] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1225 18:29:01.486049   10884 system_pods.go:61] "registry-creds-764b6fb674-4gpph" [1d56b065-e2a7-4448-984a-584747a42590] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1225 18:29:01.486057   10884 system_pods.go:61] "registry-proxy-4kbxc" [e0bb1f28-5b44-45e5-aefb-aa253b9fffa4] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1225 18:29:01.486063   10884 system_pods.go:61] "snapshot-controller-7d9fbc56b8-7w7cv" [a4fe17d7-4146-4564-8a81-01c4aaabd0b2] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1225 18:29:01.486070   10884 system_pods.go:61] "snapshot-controller-7d9fbc56b8-gxt9n" [04f68934-44b0-4812-b5e7-193c9e957db6] Pending
	I1225 18:29:01.486079   10884 system_pods.go:61] "storage-provisioner" [a4ea7ee3-63af-420b-a3f5-a0f07d86e372] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1225 18:29:01.486088   10884 system_pods.go:74] duration metric: took 4.025575ms to wait for pod list to return data ...
	I1225 18:29:01.486101   10884 default_sa.go:34] waiting for default service account to be created ...
	I1225 18:29:01.488037   10884 default_sa.go:45] found service account: "default"
	I1225 18:29:01.488054   10884 default_sa.go:55] duration metric: took 1.94799ms for default service account to be created ...
	I1225 18:29:01.488061   10884 system_pods.go:116] waiting for k8s-apps to be running ...
	I1225 18:29:01.491047   10884 system_pods.go:86] 20 kube-system pods found
	I1225 18:29:01.491073   10884 system_pods.go:89] "amd-gpu-device-plugin-n5wqv" [ebce9916-ffda-466a-99d9-dc0c42aa7b3c] Pending
	I1225 18:29:01.491084   10884 system_pods.go:89] "coredns-66bc5c9577-vq4f4" [842d7785-6e17-46f4-8932-8d200ed18c4b] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1225 18:29:01.491090   10884 system_pods.go:89] "csi-hostpath-attacher-0" [a204f537-befa-47ca-9637-0e47dbafda8d] Pending
	I1225 18:29:01.491096   10884 system_pods.go:89] "csi-hostpath-resizer-0" [3da3d42e-79eb-45e2-9464-fcd45c7843b3] Pending
	I1225 18:29:01.491102   10884 system_pods.go:89] "csi-hostpathplugin-zgpkw" [f116da97-cea1-48be-9e7b-cebaefd3bdc1] Pending
	I1225 18:29:01.491108   10884 system_pods.go:89] "etcd-addons-335994" [bb03e54d-6ac8-4009-9bb9-f3e3d7decdfc] Running
	I1225 18:29:01.491115   10884 system_pods.go:89] "kindnet-pfdzw" [dfa35684-c6cc-43ef-aae2-c64d94f32753] Running
	I1225 18:29:01.491132   10884 system_pods.go:89] "kube-apiserver-addons-335994" [4acbe065-d19e-4fbc-9ee7-2b0ad97427e5] Running
	I1225 18:29:01.491138   10884 system_pods.go:89] "kube-controller-manager-addons-335994" [f8670a07-2f2c-4053-b972-76c93f626f60] Running
	I1225 18:29:01.491148   10884 system_pods.go:89] "kube-ingress-dns-minikube" [5fb52292-2880-4819-b4cb-82d92aec2725] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1225 18:29:01.491157   10884 system_pods.go:89] "kube-proxy-znfvz" [750b4578-898a-4a84-91cf-7dce2eaba8ed] Running
	I1225 18:29:01.491164   10884 system_pods.go:89] "kube-scheduler-addons-335994" [9e3362a1-239c-4213-9379-e5bd96312a14] Running
	I1225 18:29:01.491171   10884 system_pods.go:89] "metrics-server-85b7d694d7-gbmzm" [297f143a-6dc2-4185-a40f-1367b02ad335] Pending
	I1225 18:29:01.491183   10884 system_pods.go:89] "nvidia-device-plugin-daemonset-gdrj7" [666e25f3-c012-4fd4-945a-39e959e52731] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1225 18:29:01.491194   10884 system_pods.go:89] "registry-6b586f9694-tkq87" [194d10e3-0678-4376-bd91-a96acdc8c845] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1225 18:29:01.491204   10884 system_pods.go:89] "registry-creds-764b6fb674-4gpph" [1d56b065-e2a7-4448-984a-584747a42590] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1225 18:29:01.491212   10884 system_pods.go:89] "registry-proxy-4kbxc" [e0bb1f28-5b44-45e5-aefb-aa253b9fffa4] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1225 18:29:01.491220   10884 system_pods.go:89] "snapshot-controller-7d9fbc56b8-7w7cv" [a4fe17d7-4146-4564-8a81-01c4aaabd0b2] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1225 18:29:01.491229   10884 system_pods.go:89] "snapshot-controller-7d9fbc56b8-gxt9n" [04f68934-44b0-4812-b5e7-193c9e957db6] Pending
	I1225 18:29:01.491238   10884 system_pods.go:89] "storage-provisioner" [a4ea7ee3-63af-420b-a3f5-a0f07d86e372] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1225 18:29:01.491265   10884 retry.go:84] will retry after 200ms: missing components: kube-dns
	I1225 18:29:01.501653   10884 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1225 18:29:01.501674   10884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1225 18:29:01.590355   10884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1225 18:29:01.742484   10884 system_pods.go:86] 20 kube-system pods found
	I1225 18:29:01.742529   10884 system_pods.go:89] "amd-gpu-device-plugin-n5wqv" [ebce9916-ffda-466a-99d9-dc0c42aa7b3c] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1225 18:29:01.742539   10884 system_pods.go:89] "coredns-66bc5c9577-vq4f4" [842d7785-6e17-46f4-8932-8d200ed18c4b] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1225 18:29:01.742550   10884 system_pods.go:89] "csi-hostpath-attacher-0" [a204f537-befa-47ca-9637-0e47dbafda8d] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1225 18:29:01.742558   10884 system_pods.go:89] "csi-hostpath-resizer-0" [3da3d42e-79eb-45e2-9464-fcd45c7843b3] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1225 18:29:01.742567   10884 system_pods.go:89] "csi-hostpathplugin-zgpkw" [f116da97-cea1-48be-9e7b-cebaefd3bdc1] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1225 18:29:01.742577   10884 system_pods.go:89] "etcd-addons-335994" [bb03e54d-6ac8-4009-9bb9-f3e3d7decdfc] Running
	I1225 18:29:01.742584   10884 system_pods.go:89] "kindnet-pfdzw" [dfa35684-c6cc-43ef-aae2-c64d94f32753] Running
	I1225 18:29:01.742590   10884 system_pods.go:89] "kube-apiserver-addons-335994" [4acbe065-d19e-4fbc-9ee7-2b0ad97427e5] Running
	I1225 18:29:01.742595   10884 system_pods.go:89] "kube-controller-manager-addons-335994" [f8670a07-2f2c-4053-b972-76c93f626f60] Running
	I1225 18:29:01.742604   10884 system_pods.go:89] "kube-ingress-dns-minikube" [5fb52292-2880-4819-b4cb-82d92aec2725] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1225 18:29:01.742610   10884 system_pods.go:89] "kube-proxy-znfvz" [750b4578-898a-4a84-91cf-7dce2eaba8ed] Running
	I1225 18:29:01.742616   10884 system_pods.go:89] "kube-scheduler-addons-335994" [9e3362a1-239c-4213-9379-e5bd96312a14] Running
	I1225 18:29:01.742627   10884 system_pods.go:89] "metrics-server-85b7d694d7-gbmzm" [297f143a-6dc2-4185-a40f-1367b02ad335] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1225 18:29:01.742638   10884 system_pods.go:89] "nvidia-device-plugin-daemonset-gdrj7" [666e25f3-c012-4fd4-945a-39e959e52731] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1225 18:29:01.742646   10884 system_pods.go:89] "registry-6b586f9694-tkq87" [194d10e3-0678-4376-bd91-a96acdc8c845] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1225 18:29:01.742655   10884 system_pods.go:89] "registry-creds-764b6fb674-4gpph" [1d56b065-e2a7-4448-984a-584747a42590] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1225 18:29:01.742664   10884 system_pods.go:89] "registry-proxy-4kbxc" [e0bb1f28-5b44-45e5-aefb-aa253b9fffa4] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1225 18:29:01.742672   10884 system_pods.go:89] "snapshot-controller-7d9fbc56b8-7w7cv" [a4fe17d7-4146-4564-8a81-01c4aaabd0b2] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1225 18:29:01.742680   10884 system_pods.go:89] "snapshot-controller-7d9fbc56b8-gxt9n" [04f68934-44b0-4812-b5e7-193c9e957db6] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1225 18:29:01.742687   10884 system_pods.go:89] "storage-provisioner" [a4ea7ee3-63af-420b-a3f5-a0f07d86e372] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1225 18:29:01.973954   10884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1225 18:29:01.974001   10884 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1225 18:29:02.001285   10884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1225 18:29:02.090539   10884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1225 18:29:02.124183   10884 system_pods.go:86] 20 kube-system pods found
	I1225 18:29:02.124224   10884 system_pods.go:89] "amd-gpu-device-plugin-n5wqv" [ebce9916-ffda-466a-99d9-dc0c42aa7b3c] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1225 18:29:02.124263   10884 system_pods.go:89] "coredns-66bc5c9577-vq4f4" [842d7785-6e17-46f4-8932-8d200ed18c4b] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1225 18:29:02.124278   10884 system_pods.go:89] "csi-hostpath-attacher-0" [a204f537-befa-47ca-9637-0e47dbafda8d] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1225 18:29:02.124289   10884 system_pods.go:89] "csi-hostpath-resizer-0" [3da3d42e-79eb-45e2-9464-fcd45c7843b3] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1225 18:29:02.124306   10884 system_pods.go:89] "csi-hostpathplugin-zgpkw" [f116da97-cea1-48be-9e7b-cebaefd3bdc1] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1225 18:29:02.124317   10884 system_pods.go:89] "etcd-addons-335994" [bb03e54d-6ac8-4009-9bb9-f3e3d7decdfc] Running
	I1225 18:29:02.124332   10884 system_pods.go:89] "kindnet-pfdzw" [dfa35684-c6cc-43ef-aae2-c64d94f32753] Running
	I1225 18:29:02.124343   10884 system_pods.go:89] "kube-apiserver-addons-335994" [4acbe065-d19e-4fbc-9ee7-2b0ad97427e5] Running
	I1225 18:29:02.124351   10884 system_pods.go:89] "kube-controller-manager-addons-335994" [f8670a07-2f2c-4053-b972-76c93f626f60] Running
	I1225 18:29:02.124361   10884 system_pods.go:89] "kube-ingress-dns-minikube" [5fb52292-2880-4819-b4cb-82d92aec2725] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1225 18:29:02.124368   10884 system_pods.go:89] "kube-proxy-znfvz" [750b4578-898a-4a84-91cf-7dce2eaba8ed] Running
	I1225 18:29:02.124378   10884 system_pods.go:89] "kube-scheduler-addons-335994" [9e3362a1-239c-4213-9379-e5bd96312a14] Running
	I1225 18:29:02.124388   10884 system_pods.go:89] "metrics-server-85b7d694d7-gbmzm" [297f143a-6dc2-4185-a40f-1367b02ad335] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1225 18:29:02.124398   10884 system_pods.go:89] "nvidia-device-plugin-daemonset-gdrj7" [666e25f3-c012-4fd4-945a-39e959e52731] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1225 18:29:02.124428   10884 system_pods.go:89] "registry-6b586f9694-tkq87" [194d10e3-0678-4376-bd91-a96acdc8c845] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1225 18:29:02.124441   10884 system_pods.go:89] "registry-creds-764b6fb674-4gpph" [1d56b065-e2a7-4448-984a-584747a42590] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1225 18:29:02.124451   10884 system_pods.go:89] "registry-proxy-4kbxc" [e0bb1f28-5b44-45e5-aefb-aa253b9fffa4] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1225 18:29:02.124461   10884 system_pods.go:89] "snapshot-controller-7d9fbc56b8-7w7cv" [a4fe17d7-4146-4564-8a81-01c4aaabd0b2] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1225 18:29:02.124471   10884 system_pods.go:89] "snapshot-controller-7d9fbc56b8-gxt9n" [04f68934-44b0-4812-b5e7-193c9e957db6] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1225 18:29:02.124480   10884 system_pods.go:89] "storage-provisioner" [a4ea7ee3-63af-420b-a3f5-a0f07d86e372] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1225 18:29:02.473411   10884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1225 18:29:02.473510   10884 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1225 18:29:02.502112   10884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1225 18:29:02.587062   10884 system_pods.go:86] 20 kube-system pods found
	I1225 18:29:02.587096   10884 system_pods.go:89] "amd-gpu-device-plugin-n5wqv" [ebce9916-ffda-466a-99d9-dc0c42aa7b3c] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1225 18:29:02.587105   10884 system_pods.go:89] "coredns-66bc5c9577-vq4f4" [842d7785-6e17-46f4-8932-8d200ed18c4b] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1225 18:29:02.587111   10884 system_pods.go:89] "csi-hostpath-attacher-0" [a204f537-befa-47ca-9637-0e47dbafda8d] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1225 18:29:02.587117   10884 system_pods.go:89] "csi-hostpath-resizer-0" [3da3d42e-79eb-45e2-9464-fcd45c7843b3] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1225 18:29:02.587122   10884 system_pods.go:89] "csi-hostpathplugin-zgpkw" [f116da97-cea1-48be-9e7b-cebaefd3bdc1] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1225 18:29:02.587127   10884 system_pods.go:89] "etcd-addons-335994" [bb03e54d-6ac8-4009-9bb9-f3e3d7decdfc] Running
	I1225 18:29:02.587131   10884 system_pods.go:89] "kindnet-pfdzw" [dfa35684-c6cc-43ef-aae2-c64d94f32753] Running
	I1225 18:29:02.587135   10884 system_pods.go:89] "kube-apiserver-addons-335994" [4acbe065-d19e-4fbc-9ee7-2b0ad97427e5] Running
	I1225 18:29:02.587139   10884 system_pods.go:89] "kube-controller-manager-addons-335994" [f8670a07-2f2c-4053-b972-76c93f626f60] Running
	I1225 18:29:02.587145   10884 system_pods.go:89] "kube-ingress-dns-minikube" [5fb52292-2880-4819-b4cb-82d92aec2725] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1225 18:29:02.587151   10884 system_pods.go:89] "kube-proxy-znfvz" [750b4578-898a-4a84-91cf-7dce2eaba8ed] Running
	I1225 18:29:02.587162   10884 system_pods.go:89] "kube-scheduler-addons-335994" [9e3362a1-239c-4213-9379-e5bd96312a14] Running
	I1225 18:29:02.587167   10884 system_pods.go:89] "metrics-server-85b7d694d7-gbmzm" [297f143a-6dc2-4185-a40f-1367b02ad335] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1225 18:29:02.587175   10884 system_pods.go:89] "nvidia-device-plugin-daemonset-gdrj7" [666e25f3-c012-4fd4-945a-39e959e52731] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1225 18:29:02.587180   10884 system_pods.go:89] "registry-6b586f9694-tkq87" [194d10e3-0678-4376-bd91-a96acdc8c845] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1225 18:29:02.587189   10884 system_pods.go:89] "registry-creds-764b6fb674-4gpph" [1d56b065-e2a7-4448-984a-584747a42590] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1225 18:29:02.587194   10884 system_pods.go:89] "registry-proxy-4kbxc" [e0bb1f28-5b44-45e5-aefb-aa253b9fffa4] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1225 18:29:02.587201   10884 system_pods.go:89] "snapshot-controller-7d9fbc56b8-7w7cv" [a4fe17d7-4146-4564-8a81-01c4aaabd0b2] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1225 18:29:02.587207   10884 system_pods.go:89] "snapshot-controller-7d9fbc56b8-gxt9n" [04f68934-44b0-4812-b5e7-193c9e957db6] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1225 18:29:02.587211   10884 system_pods.go:89] "storage-provisioner" [a4ea7ee3-63af-420b-a3f5-a0f07d86e372] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1225 18:29:02.590873   10884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1225 18:29:02.974446   10884 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1225 18:29:02.975091   10884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1225 18:29:03.002775   10884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1225 18:29:03.052459   10884 system_pods.go:86] 20 kube-system pods found
	I1225 18:29:03.052503   10884 system_pods.go:89] "amd-gpu-device-plugin-n5wqv" [ebce9916-ffda-466a-99d9-dc0c42aa7b3c] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1225 18:29:03.052511   10884 system_pods.go:89] "coredns-66bc5c9577-vq4f4" [842d7785-6e17-46f4-8932-8d200ed18c4b] Running
	I1225 18:29:03.052521   10884 system_pods.go:89] "csi-hostpath-attacher-0" [a204f537-befa-47ca-9637-0e47dbafda8d] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1225 18:29:03.052530   10884 system_pods.go:89] "csi-hostpath-resizer-0" [3da3d42e-79eb-45e2-9464-fcd45c7843b3] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1225 18:29:03.052538   10884 system_pods.go:89] "csi-hostpathplugin-zgpkw" [f116da97-cea1-48be-9e7b-cebaefd3bdc1] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1225 18:29:03.052544   10884 system_pods.go:89] "etcd-addons-335994" [bb03e54d-6ac8-4009-9bb9-f3e3d7decdfc] Running
	I1225 18:29:03.052550   10884 system_pods.go:89] "kindnet-pfdzw" [dfa35684-c6cc-43ef-aae2-c64d94f32753] Running
	I1225 18:29:03.052555   10884 system_pods.go:89] "kube-apiserver-addons-335994" [4acbe065-d19e-4fbc-9ee7-2b0ad97427e5] Running
	I1225 18:29:03.052560   10884 system_pods.go:89] "kube-controller-manager-addons-335994" [f8670a07-2f2c-4053-b972-76c93f626f60] Running
	I1225 18:29:03.052569   10884 system_pods.go:89] "kube-ingress-dns-minikube" [5fb52292-2880-4819-b4cb-82d92aec2725] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1225 18:29:03.052574   10884 system_pods.go:89] "kube-proxy-znfvz" [750b4578-898a-4a84-91cf-7dce2eaba8ed] Running
	I1225 18:29:03.052580   10884 system_pods.go:89] "kube-scheduler-addons-335994" [9e3362a1-239c-4213-9379-e5bd96312a14] Running
	I1225 18:29:03.052607   10884 system_pods.go:89] "metrics-server-85b7d694d7-gbmzm" [297f143a-6dc2-4185-a40f-1367b02ad335] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1225 18:29:03.052617   10884 system_pods.go:89] "nvidia-device-plugin-daemonset-gdrj7" [666e25f3-c012-4fd4-945a-39e959e52731] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1225 18:29:03.052631   10884 system_pods.go:89] "registry-6b586f9694-tkq87" [194d10e3-0678-4376-bd91-a96acdc8c845] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1225 18:29:03.052639   10884 system_pods.go:89] "registry-creds-764b6fb674-4gpph" [1d56b065-e2a7-4448-984a-584747a42590] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1225 18:29:03.052646   10884 system_pods.go:89] "registry-proxy-4kbxc" [e0bb1f28-5b44-45e5-aefb-aa253b9fffa4] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1225 18:29:03.052655   10884 system_pods.go:89] "snapshot-controller-7d9fbc56b8-7w7cv" [a4fe17d7-4146-4564-8a81-01c4aaabd0b2] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1225 18:29:03.052665   10884 system_pods.go:89] "snapshot-controller-7d9fbc56b8-gxt9n" [04f68934-44b0-4812-b5e7-193c9e957db6] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1225 18:29:03.052671   10884 system_pods.go:89] "storage-provisioner" [a4ea7ee3-63af-420b-a3f5-a0f07d86e372] Running
	I1225 18:29:03.052681   10884 system_pods.go:126] duration metric: took 1.564614234s to wait for k8s-apps to be running ...
	I1225 18:29:03.052690   10884 system_svc.go:44] waiting for kubelet service to be running ....
	I1225 18:29:03.052738   10884 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1225 18:29:03.070065   10884 system_svc.go:56] duration metric: took 17.365747ms WaitForService to wait for kubelet
	I1225 18:29:03.070102   10884 kubeadm.go:587] duration metric: took 14.79196572s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1225 18:29:03.070129   10884 node_conditions.go:102] verifying NodePressure condition ...
	I1225 18:29:03.073537   10884 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1225 18:29:03.073573   10884 node_conditions.go:123] node cpu capacity is 8
	I1225 18:29:03.073591   10884 node_conditions.go:105] duration metric: took 3.4562ms to run NodePressure ...
	I1225 18:29:03.073606   10884 start.go:242] waiting for startup goroutines ...
	I1225 18:29:03.091109   10884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1225 18:29:03.473247   10884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1225 18:29:03.473399   10884 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1225 18:29:03.501964   10884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1225 18:29:03.590604   10884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1225 18:29:03.974245   10884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1225 18:29:03.974471   10884 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1225 18:29:04.002678   10884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1225 18:29:04.091301   10884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1225 18:29:04.473548   10884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1225 18:29:04.473637   10884 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1225 18:29:04.502188   10884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1225 18:29:04.590298   10884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1225 18:29:04.975100   10884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1225 18:29:04.975156   10884 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1225 18:29:05.003096   10884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1225 18:29:05.091673   10884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1225 18:29:05.474315   10884 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1225 18:29:05.474402   10884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1225 18:29:05.502386   10884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1225 18:29:05.592167   10884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1225 18:29:05.973994   10884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1225 18:29:05.974035   10884 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1225 18:29:06.002365   10884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1225 18:29:06.091100   10884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1225 18:29:06.473378   10884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1225 18:29:06.473490   10884 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1225 18:29:06.502519   10884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1225 18:29:06.591351   10884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1225 18:29:06.973568   10884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1225 18:29:06.973638   10884 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1225 18:29:07.002168   10884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1225 18:29:07.090933   10884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1225 18:29:07.473062   10884 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1225 18:29:07.473486   10884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1225 18:29:07.502881   10884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1225 18:29:07.590390   10884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1225 18:29:07.974194   10884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1225 18:29:07.974304   10884 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1225 18:29:08.001933   10884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1225 18:29:08.090654   10884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1225 18:29:08.473437   10884 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1225 18:29:08.473609   10884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1225 18:29:08.502953   10884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1225 18:29:08.590299   10884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1225 18:29:08.973667   10884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1225 18:29:08.973732   10884 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1225 18:29:09.002447   10884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1225 18:29:09.090787   10884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1225 18:29:09.473465   10884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1225 18:29:09.473517   10884 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1225 18:29:09.502608   10884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1225 18:29:09.591427   10884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1225 18:29:09.973822   10884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1225 18:29:09.973880   10884 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1225 18:29:10.035500   10884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1225 18:29:10.090965   10884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1225 18:29:10.473518   10884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1225 18:29:10.473578   10884 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1225 18:29:10.504010   10884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1225 18:29:10.590848   10884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1225 18:29:10.973574   10884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1225 18:29:10.973603   10884 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1225 18:29:11.001971   10884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1225 18:29:11.091481   10884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1225 18:29:11.474276   10884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1225 18:29:11.474394   10884 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1225 18:29:11.502342   10884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1225 18:29:11.590732   10884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1225 18:29:11.973402   10884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1225 18:29:11.973502   10884 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1225 18:29:12.002523   10884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1225 18:29:12.091063   10884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1225 18:29:12.473218   10884 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1225 18:29:12.473481   10884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1225 18:29:12.502690   10884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1225 18:29:12.590483   10884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1225 18:29:12.973720   10884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1225 18:29:12.973787   10884 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1225 18:29:13.002297   10884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1225 18:29:13.091447   10884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1225 18:29:13.476948   10884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1225 18:29:13.476960   10884 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1225 18:29:13.503093   10884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1225 18:29:13.591242   10884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1225 18:29:14.066003   10884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1225 18:29:14.066056   10884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1225 18:29:14.066062   10884 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1225 18:29:14.166600   10884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1225 18:29:14.474582   10884 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1225 18:29:14.474597   10884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1225 18:29:14.502367   10884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1225 18:29:14.591159   10884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1225 18:29:14.973727   10884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1225 18:29:14.973888   10884 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1225 18:29:15.002237   10884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1225 18:29:15.091003   10884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1225 18:29:15.474188   10884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1225 18:29:15.474768   10884 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1225 18:29:15.502389   10884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1225 18:29:15.591074   10884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1225 18:29:15.973667   10884 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1225 18:29:15.973675   10884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1225 18:29:16.002174   10884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1225 18:29:16.093006   10884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1225 18:29:16.473618   10884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1225 18:29:16.473701   10884 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1225 18:29:16.502537   10884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1225 18:29:16.590704   10884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1225 18:29:16.997937   10884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1225 18:29:16.998037   10884 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1225 18:29:17.001063   10884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1225 18:29:17.090476   10884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1225 18:29:17.473430   10884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1225 18:29:17.473563   10884 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1225 18:29:17.501543   10884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1225 18:29:17.591180   10884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1225 18:29:17.973673   10884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1225 18:29:17.973818   10884 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1225 18:29:18.002606   10884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1225 18:29:18.091022   10884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1225 18:29:18.473965   10884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1225 18:29:18.473973   10884 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1225 18:29:18.501997   10884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1225 18:29:18.590649   10884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1225 18:29:18.973072   10884 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1225 18:29:18.973638   10884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1225 18:29:19.002802   10884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1225 18:29:19.090549   10884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1225 18:29:19.473983   10884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1225 18:29:19.474047   10884 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1225 18:29:19.575006   10884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1225 18:29:19.589929   10884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1225 18:29:19.973178   10884 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1225 18:29:19.973669   10884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1225 18:29:20.002717   10884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1225 18:29:20.090071   10884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1225 18:29:20.473076   10884 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1225 18:29:20.473443   10884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1225 18:29:20.502003   10884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1225 18:29:20.590753   10884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1225 18:29:20.973310   10884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1225 18:29:20.973435   10884 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1225 18:29:21.001917   10884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1225 18:29:21.090293   10884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1225 18:29:21.473800   10884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1225 18:29:21.473804   10884 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1225 18:29:21.502017   10884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1225 18:29:21.591751   10884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1225 18:29:21.973279   10884 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1225 18:29:21.973342   10884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1225 18:29:22.002401   10884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1225 18:29:22.090779   10884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1225 18:29:22.473126   10884 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1225 18:29:22.473254   10884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1225 18:29:22.502082   10884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1225 18:29:22.590538   10884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1225 18:29:22.974372   10884 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1225 18:29:22.974428   10884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1225 18:29:23.002658   10884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1225 18:29:23.091189   10884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1225 18:29:23.473698   10884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1225 18:29:23.473790   10884 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1225 18:29:23.501417   10884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1225 18:29:23.590779   10884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1225 18:29:23.974977   10884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1225 18:29:23.975027   10884 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1225 18:29:24.002456   10884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1225 18:29:24.091049   10884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1225 18:29:24.473839   10884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1225 18:29:24.474076   10884 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1225 18:29:24.502149   10884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1225 18:29:24.590774   10884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1225 18:29:24.973327   10884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1225 18:29:24.973373   10884 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1225 18:29:25.002208   10884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1225 18:29:25.090476   10884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1225 18:29:25.474286   10884 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1225 18:29:25.474789   10884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1225 18:29:25.503330   10884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1225 18:29:25.590624   10884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1225 18:29:25.973369   10884 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1225 18:29:25.974646   10884 kapi.go:107] duration metric: took 36.004769229s to wait for kubernetes.io/minikube-addons=registry ...
	I1225 18:29:26.002725   10884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1225 18:29:26.090208   10884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1225 18:29:26.473517   10884 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1225 18:29:26.501976   10884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1225 18:29:26.592043   10884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1225 18:29:26.973482   10884 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1225 18:29:27.002461   10884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1225 18:29:27.091158   10884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1225 18:29:27.475971   10884 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1225 18:29:27.502576   10884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1225 18:29:27.592042   10884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1225 18:29:27.974418   10884 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1225 18:29:28.003888   10884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1225 18:29:28.090647   10884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1225 18:29:28.474412   10884 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1225 18:29:28.502494   10884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1225 18:29:28.590679   10884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1225 18:29:28.972550   10884 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1225 18:29:29.002318   10884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1225 18:29:29.090814   10884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1225 18:29:29.473177   10884 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1225 18:29:29.501626   10884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1225 18:29:29.591486   10884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1225 18:29:29.973709   10884 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1225 18:29:30.002942   10884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1225 18:29:30.090785   10884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1225 18:29:30.473546   10884 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1225 18:29:30.502487   10884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1225 18:29:30.591431   10884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1225 18:29:30.973476   10884 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1225 18:29:31.002229   10884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1225 18:29:31.091003   10884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1225 18:29:31.473403   10884 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1225 18:29:31.501739   10884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1225 18:29:31.590273   10884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1225 18:29:31.973834   10884 kapi.go:107] duration metric: took 42.003998824s to wait for app.kubernetes.io/name=ingress-nginx ...
	I1225 18:29:32.003416   10884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1225 18:29:32.090848   10884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1225 18:29:32.502009   10884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1225 18:29:32.602835   10884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1225 18:29:33.002341   10884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1225 18:29:33.102202   10884 kapi.go:107] duration metric: took 36.514690328s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I1225 18:29:33.104591   10884 out.go:179] * Your GCP credentials will now be mounted into every pod created in the addons-335994 cluster.
	I1225 18:29:33.105768   10884 out.go:179] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I1225 18:29:33.107026   10884 out.go:179] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
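The three gcp-auth hints above describe an opt-out: per the message, a pod whose configuration carries a label with the `gcp-auth-skip-secret` key does not get the credentials mounted. A minimal sketch of such a pod, applied with kubectl, is shown below; the pod name `demo`, the busybox image, and the label value "true" are illustrative assumptions, not values taken from this run (the hint only says the key must be present).

	kubectl apply -f - <<'EOF'
	apiVersion: v1
	kind: Pod
	metadata:
	  name: demo
	  labels:
	    gcp-auth-skip-secret: "true"   # key presence is what matters, per the hint above
	spec:
	  containers:
	  - name: demo
	    image: busybox
	    command: ["sleep", "3600"]
	EOF

For pods created before the addon finished, the hint offers two options: recreate them, or rerun the enable step with --refresh (for this profile that would look roughly like `minikube -p addons-335994 addons enable gcp-auth --refresh`; the exact command shape is assumed from the minikube CLI and does not appear in this log).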
	I1225 18:29:33.503387   10884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1225 18:29:34.001683   10884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1225 18:29:34.501830   10884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1225 18:29:35.002079   10884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1225 18:29:35.504702   10884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1225 18:29:36.002147   10884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1225 18:29:36.502827   10884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1225 18:29:37.001869   10884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1225 18:29:37.501775   10884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1225 18:29:38.001701   10884 kapi.go:107] duration metric: took 47.502997327s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I1225 18:29:38.003310   10884 out.go:179] * Enabled addons: registry-creds, cloud-spanner, nvidia-device-plugin, storage-provisioner, ingress-dns, inspektor-gadget, metrics-server, yakd, storage-provisioner-rancher, amd-gpu-device-plugin, volumesnapshots, registry, ingress, gcp-auth, csi-hostpath-driver
	I1225 18:29:38.004518   10884 addons.go:530] duration metric: took 49.726327711s for enable addons: enabled=[registry-creds cloud-spanner nvidia-device-plugin storage-provisioner ingress-dns inspektor-gadget metrics-server yakd storage-provisioner-rancher amd-gpu-device-plugin volumesnapshots registry ingress gcp-auth csi-hostpath-driver]
	I1225 18:29:38.004561   10884 start.go:247] waiting for cluster config update ...
	I1225 18:29:38.004579   10884 start.go:256] writing updated cluster config ...
	I1225 18:29:38.004920   10884 ssh_runner.go:195] Run: rm -f paused
	I1225 18:29:38.008734   10884 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1225 18:29:38.011502   10884 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-vq4f4" in "kube-system" namespace to be "Ready" or be gone ...
	I1225 18:29:38.014843   10884 pod_ready.go:94] pod "coredns-66bc5c9577-vq4f4" is "Ready"
	I1225 18:29:38.014861   10884 pod_ready.go:86] duration metric: took 3.340771ms for pod "coredns-66bc5c9577-vq4f4" in "kube-system" namespace to be "Ready" or be gone ...
	I1225 18:29:38.016631   10884 pod_ready.go:83] waiting for pod "etcd-addons-335994" in "kube-system" namespace to be "Ready" or be gone ...
	I1225 18:29:38.019881   10884 pod_ready.go:94] pod "etcd-addons-335994" is "Ready"
	I1225 18:29:38.019913   10884 pod_ready.go:86] duration metric: took 3.262566ms for pod "etcd-addons-335994" in "kube-system" namespace to be "Ready" or be gone ...
	I1225 18:29:38.021514   10884 pod_ready.go:83] waiting for pod "kube-apiserver-addons-335994" in "kube-system" namespace to be "Ready" or be gone ...
	I1225 18:29:38.024695   10884 pod_ready.go:94] pod "kube-apiserver-addons-335994" is "Ready"
	I1225 18:29:38.024712   10884 pod_ready.go:86] duration metric: took 3.18235ms for pod "kube-apiserver-addons-335994" in "kube-system" namespace to be "Ready" or be gone ...
	I1225 18:29:38.026297   10884 pod_ready.go:83] waiting for pod "kube-controller-manager-addons-335994" in "kube-system" namespace to be "Ready" or be gone ...
	I1225 18:29:38.413036   10884 pod_ready.go:94] pod "kube-controller-manager-addons-335994" is "Ready"
	I1225 18:29:38.413070   10884 pod_ready.go:86] duration metric: took 386.756511ms for pod "kube-controller-manager-addons-335994" in "kube-system" namespace to be "Ready" or be gone ...
	I1225 18:29:38.613158   10884 pod_ready.go:83] waiting for pod "kube-proxy-znfvz" in "kube-system" namespace to be "Ready" or be gone ...
	I1225 18:29:39.012633   10884 pod_ready.go:94] pod "kube-proxy-znfvz" is "Ready"
	I1225 18:29:39.012663   10884 pod_ready.go:86] duration metric: took 399.481019ms for pod "kube-proxy-znfvz" in "kube-system" namespace to be "Ready" or be gone ...
	I1225 18:29:39.212304   10884 pod_ready.go:83] waiting for pod "kube-scheduler-addons-335994" in "kube-system" namespace to be "Ready" or be gone ...
	I1225 18:29:39.612109   10884 pod_ready.go:94] pod "kube-scheduler-addons-335994" is "Ready"
	I1225 18:29:39.612150   10884 pod_ready.go:86] duration metric: took 399.821041ms for pod "kube-scheduler-addons-335994" in "kube-system" namespace to be "Ready" or be gone ...
	I1225 18:29:39.612161   10884 pod_ready.go:40] duration metric: took 1.603400265s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1225 18:29:39.656860   10884 start.go:625] kubectl: 1.35.0, cluster: 1.34.3 (minor skew: 1)
	I1225 18:29:39.718013   10884 out.go:179] * Done! kubectl is now configured to use "addons-335994" cluster and "default" namespace by default
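The pod_ready.go loop above waits on kube-system pods selected by the labels listed at 18:29:38 (k8s-app=kube-dns, component=etcd, component=kube-apiserver, and so on) with a 4m0s budget. Since the log ends with kubectl already pointed at the addons-335994 cluster, a rough hand-run equivalent of those checks is sketched below; the selectors and timeout are taken from the log, but the exact kubectl invocations are an illustration, not commands this run executed.

	# list the kube-system pods matched by one of the selectors used above
	kubectl get pods -n kube-system -l k8s-app=kube-dns
	# block until, for example, the etcd pod reports Ready, using the same 4m budget
	kubectl wait --for=condition=Ready pod -l component=etcd -n kube-system --timeout=4m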
	
	
	==> CRI-O <==
	Dec 25 18:29:41 addons-335994 crio[777]: time="2025-12-25T18:29:41.979471839Z" level=info msg="Starting container: 81ae686eb0b1179489a8a0c4d636c69ec8462b515667172ad344a650abde8b7c" id=5f3c760b-ce9d-4924-a76d-362e6e89498e name=/runtime.v1.RuntimeService/StartContainer
	Dec 25 18:29:41 addons-335994 crio[777]: time="2025-12-25T18:29:41.981184046Z" level=info msg="Started container" PID=6361 containerID=81ae686eb0b1179489a8a0c4d636c69ec8462b515667172ad344a650abde8b7c description=default/busybox/busybox id=5f3c760b-ce9d-4924-a76d-362e6e89498e name=/runtime.v1.RuntimeService/StartContainer sandboxID=f881ab6f597b053c45bcfeb9f24cfa1ce545b328e199a6d0289ea993383412e7
	Dec 25 18:29:49 addons-335994 crio[777]: time="2025-12-25T18:29:49.410716194Z" level=info msg="Running pod sandbox: local-path-storage/helper-pod-create-pvc-02dd38e3-0129-4012-abf1-a88532524ad2/POD" id=a5139a0c-eaec-4b0a-9980-17ecdbefc2ab name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 25 18:29:49 addons-335994 crio[777]: time="2025-12-25T18:29:49.410778987Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 25 18:29:49 addons-335994 crio[777]: time="2025-12-25T18:29:49.41726732Z" level=info msg="Got pod network &{Name:helper-pod-create-pvc-02dd38e3-0129-4012-abf1-a88532524ad2 Namespace:local-path-storage ID:f7e2112fc7177e63fa4f4f7ad0f92bb2e55f19339c77ed577b0dacd70510c99d UID:34712d37-fcf8-4db7-94de-2d13cd15fda2 NetNS:/var/run/netns/3cc9c00f-40a3-49c2-bb0d-8f188ca6f0c4 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc00008ad70}] Aliases:map[]}"
	Dec 25 18:29:49 addons-335994 crio[777]: time="2025-12-25T18:29:49.417302017Z" level=info msg="Adding pod local-path-storage_helper-pod-create-pvc-02dd38e3-0129-4012-abf1-a88532524ad2 to CNI network \"kindnet\" (type=ptp)"
	Dec 25 18:29:49 addons-335994 crio[777]: time="2025-12-25T18:29:49.427583726Z" level=info msg="Got pod network &{Name:helper-pod-create-pvc-02dd38e3-0129-4012-abf1-a88532524ad2 Namespace:local-path-storage ID:f7e2112fc7177e63fa4f4f7ad0f92bb2e55f19339c77ed577b0dacd70510c99d UID:34712d37-fcf8-4db7-94de-2d13cd15fda2 NetNS:/var/run/netns/3cc9c00f-40a3-49c2-bb0d-8f188ca6f0c4 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc00008ad70}] Aliases:map[]}"
	Dec 25 18:29:49 addons-335994 crio[777]: time="2025-12-25T18:29:49.427744546Z" level=info msg="Checking pod local-path-storage_helper-pod-create-pvc-02dd38e3-0129-4012-abf1-a88532524ad2 for CNI network kindnet (type=ptp)"
	Dec 25 18:29:49 addons-335994 crio[777]: time="2025-12-25T18:29:49.429472201Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Dec 25 18:29:49 addons-335994 crio[777]: time="2025-12-25T18:29:49.431234954Z" level=info msg="Ran pod sandbox f7e2112fc7177e63fa4f4f7ad0f92bb2e55f19339c77ed577b0dacd70510c99d with infra container: local-path-storage/helper-pod-create-pvc-02dd38e3-0129-4012-abf1-a88532524ad2/POD" id=a5139a0c-eaec-4b0a-9980-17ecdbefc2ab name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 25 18:29:49 addons-335994 crio[777]: time="2025-12-25T18:29:49.432473873Z" level=info msg="Checking image status: docker.io/busybox:stable@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79" id=7cf06694-0d6a-4a09-91f0-4d40fb3eda94 name=/runtime.v1.ImageService/ImageStatus
	Dec 25 18:29:49 addons-335994 crio[777]: time="2025-12-25T18:29:49.43263753Z" level=info msg="Image docker.io/busybox:stable@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79 not found" id=7cf06694-0d6a-4a09-91f0-4d40fb3eda94 name=/runtime.v1.ImageService/ImageStatus
	Dec 25 18:29:49 addons-335994 crio[777]: time="2025-12-25T18:29:49.432687248Z" level=info msg="Neither image nor artfiact docker.io/busybox:stable@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79 found" id=7cf06694-0d6a-4a09-91f0-4d40fb3eda94 name=/runtime.v1.ImageService/ImageStatus
	Dec 25 18:29:49 addons-335994 crio[777]: time="2025-12-25T18:29:49.433253791Z" level=info msg="Pulling image: docker.io/busybox:stable@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79" id=fe62a2ac-47e9-4858-bb71-a0ebca7459ac name=/runtime.v1.ImageService/PullImage
	Dec 25 18:29:49 addons-335994 crio[777]: time="2025-12-25T18:29:49.437947588Z" level=info msg="Trying to access \"docker.io/library/busybox@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79\""
	Dec 25 18:29:49 addons-335994 crio[777]: time="2025-12-25T18:29:49.978079049Z" level=info msg="Pulled image: docker.io/library/busybox@sha256:023917ec6a886d0e8e15f28fb543515a5fcd8d938edb091e8147db4efed388ee" id=fe62a2ac-47e9-4858-bb71-a0ebca7459ac name=/runtime.v1.ImageService/PullImage
	Dec 25 18:29:49 addons-335994 crio[777]: time="2025-12-25T18:29:49.978675068Z" level=info msg="Checking image status: docker.io/busybox:stable@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79" id=94f1a4bc-903a-4f21-890c-52b880a6aab2 name=/runtime.v1.ImageService/ImageStatus
	Dec 25 18:29:49 addons-335994 crio[777]: time="2025-12-25T18:29:49.980357329Z" level=info msg="Checking image status: docker.io/busybox:stable@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79" id=d0a09563-3a44-4d5a-8f62-89a3d95d0d9c name=/runtime.v1.ImageService/ImageStatus
	Dec 25 18:29:49 addons-335994 crio[777]: time="2025-12-25T18:29:49.984263542Z" level=info msg="Creating container: local-path-storage/helper-pod-create-pvc-02dd38e3-0129-4012-abf1-a88532524ad2/helper-pod" id=368fdca9-4ed4-4fab-acc7-f09e315c7cae name=/runtime.v1.RuntimeService/CreateContainer
	Dec 25 18:29:49 addons-335994 crio[777]: time="2025-12-25T18:29:49.984409271Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 25 18:29:49 addons-335994 crio[777]: time="2025-12-25T18:29:49.990851463Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 25 18:29:49 addons-335994 crio[777]: time="2025-12-25T18:29:49.99135233Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 25 18:29:50 addons-335994 crio[777]: time="2025-12-25T18:29:50.034662723Z" level=info msg="Created container 2e4248f81ff812375d500c43b063b506ed73d57712f8b3f3e3f49af2647661e2: local-path-storage/helper-pod-create-pvc-02dd38e3-0129-4012-abf1-a88532524ad2/helper-pod" id=368fdca9-4ed4-4fab-acc7-f09e315c7cae name=/runtime.v1.RuntimeService/CreateContainer
	Dec 25 18:29:50 addons-335994 crio[777]: time="2025-12-25T18:29:50.035490049Z" level=info msg="Starting container: 2e4248f81ff812375d500c43b063b506ed73d57712f8b3f3e3f49af2647661e2" id=4576bc63-1846-43d5-b873-5089649086a9 name=/runtime.v1.RuntimeService/StartContainer
	Dec 25 18:29:50 addons-335994 crio[777]: time="2025-12-25T18:29:50.037498096Z" level=info msg="Started container" PID=6643 containerID=2e4248f81ff812375d500c43b063b506ed73d57712f8b3f3e3f49af2647661e2 description=local-path-storage/helper-pod-create-pvc-02dd38e3-0129-4012-abf1-a88532524ad2/helper-pod id=4576bc63-1846-43d5-b873-5089649086a9 name=/runtime.v1.RuntimeService/StartContainer sandboxID=f7e2112fc7177e63fa4f4f7ad0f92bb2e55f19339c77ed577b0dacd70510c99d
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                                        CREATED                  STATE               NAME                                     ATTEMPT             POD ID              POD                                                          NAMESPACE
	2e4248f81ff81       docker.io/library/busybox@sha256:023917ec6a886d0e8e15f28fb543515a5fcd8d938edb091e8147db4efed388ee                                            Less than a second ago   Exited              helper-pod                               0                   f7e2112fc7177       helper-pod-create-pvc-02dd38e3-0129-4012-abf1-a88532524ad2   local-path-storage
	81ae686eb0b11       gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998                                          8 seconds ago            Running             busybox                                  0                   f881ab6f597b0       busybox                                                      default
	ae13d82ab1920       registry.k8s.io/sig-storage/csi-snapshotter@sha256:d844cb1faeb4ecf44bae6aea370c9c6128a87e665e40370021427d79a8819ee5                          13 seconds ago           Running             csi-snapshotter                          0                   0ed39553d8b4e       csi-hostpathplugin-zgpkw                                     kube-system
	a564f66aff1c2       registry.k8s.io/sig-storage/csi-provisioner@sha256:1bc653d13b27b8eefbba0799bdb5711819f8b987eaa6eb6750e8ef001958d5a7                          14 seconds ago           Running             csi-provisioner                          0                   0ed39553d8b4e       csi-hostpathplugin-zgpkw                                     kube-system
	fb9aa0d60f0c8       registry.k8s.io/sig-storage/livenessprobe@sha256:42bc492c3c65078b1ccda5dbc416abf0cefdba3e6317416cbc43344cf0ed09b6                            15 seconds ago           Running             liveness-probe                           0                   0ed39553d8b4e       csi-hostpathplugin-zgpkw                                     kube-system
	d2650d63d689a       registry.k8s.io/sig-storage/hostpathplugin@sha256:6fdad87766e53edf987545067e69a0dffb8485cccc546be4efbaa14c9b22ea11                           15 seconds ago           Running             hostpath                                 0                   0ed39553d8b4e       csi-hostpathplugin-zgpkw                                     kube-system
	4128b130074a2       registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:7caa903cf3f8d1d70c3b7bb3e23223685b05e4f342665877eabe84ae38b92ecc                17 seconds ago           Running             node-driver-registrar                    0                   0ed39553d8b4e       csi-hostpathplugin-zgpkw                                     kube-system
	4bc3e23f30cfb       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:441f351b4520c228d29ba8c02a438d9ba971dafbbba5c91eaf882b1528797fb8                                 17 seconds ago           Running             gcp-auth                                 0                   9b474dce48f5f       gcp-auth-78565c9fb4-2mzfq                                    gcp-auth
	e353304160ee8       registry.k8s.io/ingress-nginx/controller@sha256:d552aeecf01939bd11bdc4fa57ce7437d42651194a61edcd6b7aea44b9e74cad                             19 seconds ago           Running             controller                               0                   25ef547baa5ae       ingress-nginx-controller-6cc59ccc48-76scl                    ingress-nginx
	01092dc4476f0       ghcr.io/inspektor-gadget/inspektor-gadget@sha256:ea428be7b01d41418fca4d91ae3dff6b037bdc0d42757e7ad392a38536488a1a                            22 seconds ago           Running             gadget                                   0                   d4802f627cab5       gadget-8js4j                                                 gadget
	33817592eb0db       gcr.io/k8s-minikube/kube-registry-proxy@sha256:8f72a79b63ca56074435e82b87fca2642a8117e60be313d3586dbe2bfff11cac                              25 seconds ago           Running             registry-proxy                           0                   ae2199ea84c18       registry-proxy-4kbxc                                         kube-system
	9b67245ec9b38       nvcr.io/nvidia/k8s-device-plugin@sha256:c3c1a099015d1810c249ba294beaad656ce0354f7e8a77803dacabe60a4f8c9f                                     26 seconds ago           Running             nvidia-device-plugin-ctr                 0                   1a4c9ee6bab8a       nvidia-device-plugin-daemonset-gdrj7                         kube-system
	2919cee4cae67       docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f                                     30 seconds ago           Running             amd-gpu-device-plugin                    0                   a079870bb3178       amd-gpu-device-plugin-n5wqv                                  kube-system
	9be78c7231cf4       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:e2d8d9e1553c1ac5f9f41bc34d38d1eda519ed77a3106b036c43b6667dad19a9                   30 seconds ago           Exited              patch                                    0                   37a4f268e416e       ingress-nginx-admission-patch-kxd4k                          ingress-nginx
	8fbc3d212062a       registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:317f43813e4e2c3e81823ff16041c8e0714fb80e6d040c6e6c799967ba27d864   30 seconds ago           Running             csi-external-health-monitor-controller   0                   0ed39553d8b4e       csi-hostpathplugin-zgpkw                                     kube-system
	a10d92f993ff9       registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922                      31 seconds ago           Running             volume-snapshot-controller               0                   4944ea6310e72       snapshot-controller-7d9fbc56b8-gxt9n                         kube-system
	9d2aac3055073       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:e2d8d9e1553c1ac5f9f41bc34d38d1eda519ed77a3106b036c43b6667dad19a9                   31 seconds ago           Exited              patch                                    0                   71ee68bf85bc1       gcp-auth-certs-patch-rrc42                                   gcp-auth
	0d7c280dc2452       registry.k8s.io/sig-storage/csi-attacher@sha256:66e4ecfa0ec50a88f9cd145e006805816f57040f40662d4cb9e31d10519d9bf0                             31 seconds ago           Running             csi-attacher                             0                   e378b7c625c71       csi-hostpath-attacher-0                                      kube-system
	e00018aa5b33d       registry.k8s.io/sig-storage/csi-resizer@sha256:0629447f7946e53df3ad775c5595888de1dae5a23bcaae8f68fdab0395af61a8                              32 seconds ago           Running             csi-resizer                              0                   dc6b0f4386a4c       csi-hostpath-resizer-0                                       kube-system
	e8585c6c0c58b       registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922                      34 seconds ago           Running             volume-snapshot-controller               0                   84692d294aaaa       snapshot-controller-7d9fbc56b8-7w7cv                         kube-system
	2bdc74b32d792       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:e2d8d9e1553c1ac5f9f41bc34d38d1eda519ed77a3106b036c43b6667dad19a9                   34 seconds ago           Exited              create                                   0                   6db9da0d9ea4b       gcp-auth-certs-create-mk2q5                                  gcp-auth
	c816a13950e20       ghcr.io/manusa/yakd@sha256:ef51bed688eb0feab1405f97b7286dfe1da3c61e5a189ce4ae34a90c9f9cf8aa                                                  35 seconds ago           Running             yakd                                     0                   852fa017b4c03       yakd-dashboard-7896b7cb5b-vq4hd                              yakd-dashboard
	b68e3df89706b       registry.k8s.io/metrics-server/metrics-server@sha256:5dd31abb8093690d9624a53277a00d2257e7e57e6766be3f9f54cf9f54cddbc1                        37 seconds ago           Running             metrics-server                           0                   a7737f58c5ed5       metrics-server-85b7d694d7-gbmzm                              kube-system
	13b6423b58597       docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef                             38 seconds ago           Running             local-path-provisioner                   0                   8158e2ad3a8d9       local-path-provisioner-648f6765c9-25p2m                      local-path-storage
	74c28fdc2b265       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:e2d8d9e1553c1ac5f9f41bc34d38d1eda519ed77a3106b036c43b6667dad19a9                   39 seconds ago           Exited              create                                   0                   88613267c59db       ingress-nginx-admission-create-qdk28                         ingress-nginx
	9e74ae6ae78b4       docker.io/library/registry@sha256:f57ffd2bb01704b6082396158e77ca6d1112bc6fe32315c322864de804750d8a                                           39 seconds ago           Running             registry                                 0                   427203f0601c1       registry-6b586f9694-tkq87                                    kube-system
	c5c8ab56e74b2       docker.io/kicbase/minikube-ingress-dns@sha256:a0cc6cd76812357245a51bb05fabcd346a616c880e40ca4e0c8c8253912eaae7                               41 seconds ago           Running             minikube-ingress-dns                     0                   e0d3660e9d254       kube-ingress-dns-minikube                                    kube-system
	55c5d59274d75       gcr.io/cloud-spanner-emulator/emulator@sha256:b948b04b45496ebeb13eee27bc9d238593c142e8e010443892153f181591abde                               45 seconds ago           Running             cloud-spanner-emulator                   0                   138f228650c6e       cloud-spanner-emulator-85df47b6f4-lr4nx                      default
	eacb0925a485d       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                                                             48 seconds ago           Running             coredns                                  0                   512982cf5dae6       coredns-66bc5c9577-vq4f4                                     kube-system
	085d0c77def90       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                                             48 seconds ago           Running             storage-provisioner                      0                   a5fef98ba8a46       storage-provisioner                                          kube-system
	e2dc79b0850b5       docker.io/kindest/kindnetd@sha256:7c22558dc06a570d46ea6e8a73b23cdc754eb81f7c08d3441a3171ad359ffc27                                           59 seconds ago           Running             kindnet-cni                              0                   e9c96d3fedd59       kindnet-pfdzw                                                kube-system
	80a662cb164e4       36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691                                                                             About a minute ago       Running             kube-proxy                               0                   2dc4ec9e95a59       kube-proxy-znfvz                                             kube-system
	e3cdb3152d28b       5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942                                                                             About a minute ago       Running             kube-controller-manager                  0                   7033fbc0c5a22       kube-controller-manager-addons-335994                        kube-system
	ccfe4d87852c0       aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78                                                                             About a minute ago       Running             kube-scheduler                           0                   f69705ecee441       kube-scheduler-addons-335994                                 kube-system
	ae5624121adcc       a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1                                                                             About a minute ago       Running             etcd                                     0                   1b6fa585464e9       etcd-addons-335994                                           kube-system
	4beb6e0a29121       aa27095f5619377172f3d59289ccb2ba567ebea93a736d1705be068b2c030b0c                                                                             About a minute ago       Running             kube-apiserver                           0                   2ba0c40290c2b       kube-apiserver-addons-335994                                 kube-system
	
	
	==> coredns [eacb0925a485dcae72269b51d9663345c1f11632b5013a549a26bf8fb2fb5c80] <==
	[INFO] 10.244.0.15:39964 - 32168 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000154152s
	[INFO] 10.244.0.15:33506 - 40798 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000114705s
	[INFO] 10.244.0.15:33506 - 41109 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.00017982s
	[INFO] 10.244.0.15:37153 - 50433 "A IN registry.kube-system.svc.cluster.local.us-east4-a.c.k8s-minikube.internal. udp 91 false 512" NXDOMAIN qr,aa,rd,ra 198 0.000062279s
	[INFO] 10.244.0.15:37153 - 50852 "AAAA IN registry.kube-system.svc.cluster.local.us-east4-a.c.k8s-minikube.internal. udp 91 false 512" NXDOMAIN qr,aa,rd,ra 198 0.000109431s
	[INFO] 10.244.0.15:42689 - 43023 "AAAA IN registry.kube-system.svc.cluster.local.c.k8s-minikube.internal. udp 80 false 512" NXDOMAIN qr,aa,rd,ra 185 0.000058802s
	[INFO] 10.244.0.15:42689 - 42729 "A IN registry.kube-system.svc.cluster.local.c.k8s-minikube.internal. udp 80 false 512" NXDOMAIN qr,aa,rd,ra 185 0.000087524s
	[INFO] 10.244.0.15:33892 - 19498 "AAAA IN registry.kube-system.svc.cluster.local.google.internal. udp 72 false 512" NXDOMAIN qr,aa,rd,ra 177 0.000043045s
	[INFO] 10.244.0.15:33892 - 19268 "A IN registry.kube-system.svc.cluster.local.google.internal. udp 72 false 512" NXDOMAIN qr,aa,rd,ra 177 0.000061033s
	[INFO] 10.244.0.15:50958 - 28254 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000071164s
	[INFO] 10.244.0.15:50958 - 28493 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000101953s
	[INFO] 10.244.0.21:36418 - 57155 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000166309s
	[INFO] 10.244.0.21:34424 - 43620 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000215618s
	[INFO] 10.244.0.21:50562 - 7440 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000111242s
	[INFO] 10.244.0.21:40241 - 26902 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.00011728s
	[INFO] 10.244.0.21:50759 - 3560 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000109879s
	[INFO] 10.244.0.21:40763 - 39584 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000155986s
	[INFO] 10.244.0.21:42190 - 29342 "A IN storage.googleapis.com.us-east4-a.c.k8s-minikube.internal. udp 86 false 1232" NXDOMAIN qr,rd,ra 182 0.004925561s
	[INFO] 10.244.0.21:43993 - 797 "AAAA IN storage.googleapis.com.us-east4-a.c.k8s-minikube.internal. udp 86 false 1232" NXDOMAIN qr,rd,ra 182 0.00614669s
	[INFO] 10.244.0.21:50252 - 21030 "AAAA IN storage.googleapis.com.c.k8s-minikube.internal. udp 75 false 1232" NXDOMAIN qr,rd,ra 169 0.00381491s
	[INFO] 10.244.0.21:33536 - 55761 "A IN storage.googleapis.com.c.k8s-minikube.internal. udp 75 false 1232" NXDOMAIN qr,rd,ra 169 0.005258315s
	[INFO] 10.244.0.21:52604 - 27559 "A IN storage.googleapis.com.google.internal. udp 67 false 1232" NXDOMAIN qr,rd,ra 161 0.005202207s
	[INFO] 10.244.0.21:59337 - 42537 "AAAA IN storage.googleapis.com.google.internal. udp 67 false 1232" NXDOMAIN qr,rd,ra 161 0.006845156s
	[INFO] 10.244.0.21:34220 - 44286 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 572 0.001999265s
	[INFO] 10.244.0.21:32975 - 51212 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.002037331s
	
	
	==> describe nodes <==
	Name:               addons-335994
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-335994
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=65b0339f3ab6fa9cf527eb915d9288ef7a9c7fef
	                    minikube.k8s.io/name=addons-335994
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_25T18_28_43_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-335994
	Annotations:        csi.volume.kubernetes.io/nodeid: {"hostpath.csi.k8s.io":"addons-335994"}
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 25 Dec 2025 18:28:40 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-335994
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 25 Dec 2025 18:29:43 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 25 Dec 2025 18:29:44 +0000   Thu, 25 Dec 2025 18:28:39 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 25 Dec 2025 18:29:44 +0000   Thu, 25 Dec 2025 18:28:39 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 25 Dec 2025 18:29:44 +0000   Thu, 25 Dec 2025 18:28:39 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 25 Dec 2025 18:29:44 +0000   Thu, 25 Dec 2025 18:29:01 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-335994
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863360Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863360Ki
	  pods:               110
	System Info:
	  Machine ID:                 6a05ebbd9dbde47388fc365d694bbbfd
	  System UUID:                0ccb90a1-765b-4d3d-b4a6-bf46bc694d23
	  Boot ID:                    665c5054-bd76-444c-ba4d-23c4edde1464
	  Kernel Version:             6.8.0-1045-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.3
	  Kubelet Version:            v1.34.3
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (28 in total)
	  Namespace                   Name                                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         10s
	  default                     cloud-spanner-emulator-85df47b6f4-lr4nx                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         61s
	  gadget                      gadget-8js4j                                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         61s
	  gcp-auth                    gcp-auth-78565c9fb4-2mzfq                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         54s
	  ingress-nginx               ingress-nginx-controller-6cc59ccc48-76scl                     100m (1%)     0 (0%)      90Mi (0%)        0 (0%)         61s
	  kube-system                 amd-gpu-device-plugin-n5wqv                                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         49s
	  kube-system                 coredns-66bc5c9577-vq4f4                                      100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     62s
	  kube-system                 csi-hostpath-attacher-0                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         60s
	  kube-system                 csi-hostpath-resizer-0                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         60s
	  kube-system                 csi-hostpathplugin-zgpkw                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         49s
	  kube-system                 etcd-addons-335994                                            100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         68s
	  kube-system                 kindnet-pfdzw                                                 100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      62s
	  kube-system                 kube-apiserver-addons-335994                                  250m (3%)     0 (0%)      0 (0%)           0 (0%)         68s
	  kube-system                 kube-controller-manager-addons-335994                         200m (2%)     0 (0%)      0 (0%)           0 (0%)         68s
	  kube-system                 kube-ingress-dns-minikube                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         61s
	  kube-system                 kube-proxy-znfvz                                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         62s
	  kube-system                 kube-scheduler-addons-335994                                  100m (1%)     0 (0%)      0 (0%)           0 (0%)         69s
	  kube-system                 metrics-server-85b7d694d7-gbmzm                               100m (1%)     0 (0%)      200Mi (0%)       0 (0%)         61s
	  kube-system                 nvidia-device-plugin-daemonset-gdrj7                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         49s
	  kube-system                 registry-6b586f9694-tkq87                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         61s
	  kube-system                 registry-creds-764b6fb674-4gpph                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         62s
	  kube-system                 registry-proxy-4kbxc                                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         49s
	  kube-system                 snapshot-controller-7d9fbc56b8-7w7cv                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         60s
	  kube-system                 snapshot-controller-7d9fbc56b8-gxt9n                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         60s
	  kube-system                 storage-provisioner                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         61s
	  local-path-storage          helper-pod-create-pvc-02dd38e3-0129-4012-abf1-a88532524ad2    0 (0%)        0 (0%)      0 (0%)           0 (0%)         1s
	  local-path-storage          local-path-provisioner-648f6765c9-25p2m                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         61s
	  yakd-dashboard              yakd-dashboard-7896b7cb5b-vq4hd                               0 (0%)        0 (0%)      128Mi (0%)       256Mi (0%)     61s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1050m (13%)  100m (1%)
	  memory             638Mi (1%)   476Mi (1%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 60s   kube-proxy       
	  Normal  Starting                 68s   kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  68s   kubelet          Node addons-335994 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    68s   kubelet          Node addons-335994 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     68s   kubelet          Node addons-335994 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           63s   node-controller  Node addons-335994 event: Registered Node addons-335994 in Controller
	  Normal  NodeReady                49s   kubelet          Node addons-335994 status is now: NodeReady
	
	
	==> dmesg <==
	[Dec25 18:17] MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
	[  +0.001703] TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details.
	[  +0.001000] MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
	[  +0.084012] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
	[  +0.391152] i8042: Warning: Keylock active
	[  +0.010665] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.485479] block sda: the capability attribute has been deprecated.
	[  +0.079658] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.024208] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +4.790329] kauditd_printk_skb: 47 callbacks suppressed
	
	
	==> etcd [ae5624121adcc542b0fa7d372b4201440bf41b8429c7c957b8f58572f05dce8b] <==
	{"level":"warn","ts":"2025-12-25T18:28:39.925938Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51652","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-25T18:28:39.932674Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51678","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-25T18:28:39.938865Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51702","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-25T18:28:39.944884Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51708","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-25T18:28:39.960313Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51728","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-25T18:28:39.966751Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51748","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-25T18:28:39.975188Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51772","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-25T18:28:40.023609Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51790","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-25T18:28:51.006656Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59282","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-25T18:28:51.012962Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59292","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-12-25T18:29:13.877177Z","caller":"traceutil/trace.go:172","msg":"trace[1327272122] transaction","detail":"{read_only:false; response_revision:983; number_of_response:1; }","duration":"100.899099ms","start":"2025-12-25T18:29:13.776259Z","end":"2025-12-25T18:29:13.877159Z","steps":["trace[1327272122] 'process raft request'  (duration: 100.779013ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-25T18:29:13.991976Z","caller":"traceutil/trace.go:172","msg":"trace[1064048464] linearizableReadLoop","detail":"{readStateIndex:1003; appliedIndex:1003; }","duration":"109.079095ms","start":"2025-12-25T18:29:13.882873Z","end":"2025-12-25T18:29:13.991952Z","steps":["trace[1064048464] 'read index received'  (duration: 109.070072ms)","trace[1064048464] 'applied index is now lower than readState.Index'  (duration: 7.59µs)"],"step_count":2}
	{"level":"warn","ts":"2025-12-25T18:29:14.062687Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"179.790168ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/ingress-nginx/ingress-nginx-admission-create-qdk28\" limit:1 ","response":"range_response_count:1 size:4960"}
	{"level":"info","ts":"2025-12-25T18:29:14.062770Z","caller":"traceutil/trace.go:172","msg":"trace[1608156480] range","detail":"{range_begin:/registry/pods/ingress-nginx/ingress-nginx-admission-create-qdk28; range_end:; response_count:1; response_revision:983; }","duration":"179.888906ms","start":"2025-12-25T18:29:13.882868Z","end":"2025-12-25T18:29:14.062757Z","steps":["trace[1608156480] 'agreement among raft nodes before linearized reading'  (duration: 109.173668ms)","trace[1608156480] 'range keys from in-memory index tree'  (duration: 70.515431ms)"],"step_count":2}
	{"level":"info","ts":"2025-12-25T18:29:14.062771Z","caller":"traceutil/trace.go:172","msg":"trace[1298324319] transaction","detail":"{read_only:false; response_revision:984; number_of_response:1; }","duration":"256.795838ms","start":"2025-12-25T18:29:13.805954Z","end":"2025-12-25T18:29:14.062750Z","steps":["trace[1298324319] 'process raft request'  (duration: 186.045022ms)","trace[1298324319] 'compare'  (duration: 70.608779ms)"],"step_count":2}
	{"level":"info","ts":"2025-12-25T18:29:14.062840Z","caller":"traceutil/trace.go:172","msg":"trace[1138754306] transaction","detail":"{read_only:false; response_revision:986; number_of_response:1; }","duration":"179.520685ms","start":"2025-12-25T18:29:13.883308Z","end":"2025-12-25T18:29:14.062829Z","steps":["trace[1138754306] 'process raft request'  (duration: 179.47918ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-25T18:29:14.063018Z","caller":"traceutil/trace.go:172","msg":"trace[1440957019] transaction","detail":"{read_only:false; response_revision:985; number_of_response:1; }","duration":"180.301331ms","start":"2025-12-25T18:29:13.882704Z","end":"2025-12-25T18:29:14.063005Z","steps":["trace[1440957019] 'process raft request'  (duration: 180.027101ms)"],"step_count":1}
	{"level":"warn","ts":"2025-12-25T18:29:14.063143Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"141.634692ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" limit:1 ","response":"range_response_count:1 size:1113"}
	{"level":"info","ts":"2025-12-25T18:29:14.063184Z","caller":"traceutil/trace.go:172","msg":"trace[1716387860] range","detail":"{range_begin:/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath; range_end:; response_count:1; response_revision:986; }","duration":"141.680499ms","start":"2025-12-25T18:29:13.921495Z","end":"2025-12-25T18:29:14.063175Z","steps":["trace[1716387860] 'agreement among raft nodes before linearized reading'  (duration: 141.576054ms)"],"step_count":1}
	{"level":"warn","ts":"2025-12-25T18:29:16.147771Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43060","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-25T18:29:16.154501Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43078","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-25T18:29:16.168149Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43086","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-25T18:29:16.174779Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43114","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-25T18:29:17.222471Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"123.154239ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 serializable:true keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-12-25T18:29:17.222595Z","caller":"traceutil/trace.go:172","msg":"trace[1492077131] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:1020; }","duration":"123.285703ms","start":"2025-12-25T18:29:17.099292Z","end":"2025-12-25T18:29:17.222577Z","steps":["trace[1492077131] 'range keys from in-memory index tree'  (duration: 123.133607ms)"],"step_count":1}
	
	
	==> gcp-auth [4bc3e23f30cfb2f45c3f0a7f7a05f918327cb6ca72f94d1ce24aea1888cef54a] <==
	2025/12/25 18:29:32 GCP Auth Webhook started!
	2025/12/25 18:29:40 Ready to marshal response ...
	2025/12/25 18:29:40 Ready to write response ...
	2025/12/25 18:29:40 Ready to marshal response ...
	2025/12/25 18:29:40 Ready to write response ...
	2025/12/25 18:29:40 Ready to marshal response ...
	2025/12/25 18:29:40 Ready to write response ...
	2025/12/25 18:29:49 Ready to marshal response ...
	2025/12/25 18:29:49 Ready to write response ...
	2025/12/25 18:29:49 Ready to marshal response ...
	2025/12/25 18:29:49 Ready to write response ...
	
	
	==> kernel <==
	 18:29:50 up 12 min,  0 user,  load average: 2.39, 0.88, 0.32
	Linux addons-335994 6.8.0-1045-gcp #48~22.04.1-Ubuntu SMP Tue Nov 25 13:07:56 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [e2dc79b0850b584749fd199f4bbde9ba7b322136a49f4c877b6c309de232e3bc] <==
	I1225 18:28:50.819839       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1225 18:28:50.820172       1 main.go:139] hostIP = 192.168.49.2
	podIP = 192.168.49.2
	I1225 18:28:50.820324       1 main.go:148] setting mtu 1500 for CNI 
	I1225 18:28:50.820345       1 main.go:178] kindnetd IP family: "ipv4"
	I1225 18:28:50.820362       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-12-25T18:28:51Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1225 18:28:51.021648       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1225 18:28:51.021674       1 controller.go:381] "Waiting for informer caches to sync"
	I1225 18:28:51.021685       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1225 18:28:51.022338       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1225 18:28:51.522708       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1225 18:28:51.522739       1 metrics.go:72] Registering metrics
	I1225 18:28:51.522800       1 controller.go:711] "Syncing nftables rules"
	I1225 18:29:01.024434       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1225 18:29:01.024545       1 main.go:301] handling current node
	I1225 18:29:11.021673       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1225 18:29:11.021729       1 main.go:301] handling current node
	I1225 18:29:21.021758       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1225 18:29:21.021821       1 main.go:301] handling current node
	I1225 18:29:31.022074       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1225 18:29:31.022121       1 main.go:301] handling current node
	I1225 18:29:41.022313       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1225 18:29:41.022348       1 main.go:301] handling current node
	
	
	==> kube-apiserver [4beb6e0a291214adc57d2a068c0b6283ca02b7da651625ff32c7fa8173b8294a] <==
	E1225 18:29:01.363867       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.97.246.196:443: connect: connection refused" logger="UnhandledError"
	W1225 18:29:01.363924       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.97.246.196:443: connect: connection refused
	E1225 18:29:01.363957       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.97.246.196:443: connect: connection refused" logger="UnhandledError"
	W1225 18:29:01.385824       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.97.246.196:443: connect: connection refused
	E1225 18:29:01.385866       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.97.246.196:443: connect: connection refused" logger="UnhandledError"
	W1225 18:29:01.393352       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.97.246.196:443: connect: connection refused
	E1225 18:29:01.393393       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.97.246.196:443: connect: connection refused" logger="UnhandledError"
	W1225 18:29:14.802153       1 handler_proxy.go:99] no RequestInfo found in the context
	E1225 18:29:14.802200       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.104.216.23:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.104.216.23:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.104.216.23:443: connect: connection refused" logger="UnhandledError"
	E1225 18:29:14.802230       1 controller.go:146] "Unhandled Error" err=<
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	E1225 18:29:14.802628       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.104.216.23:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.104.216.23:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.104.216.23:443: connect: connection refused" logger="UnhandledError"
	E1225 18:29:14.807707       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.104.216.23:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.104.216.23:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.104.216.23:443: connect: connection refused" logger="UnhandledError"
	E1225 18:29:14.829114       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.104.216.23:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.104.216.23:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.104.216.23:443: connect: connection refused" logger="UnhandledError"
	E1225 18:29:14.870731       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.104.216.23:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.104.216.23:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.104.216.23:443: connect: connection refused" logger="UnhandledError"
	E1225 18:29:14.951371       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.104.216.23:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.104.216.23:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.104.216.23:443: connect: connection refused" logger="UnhandledError"
	I1225 18:29:15.140639       1 handler.go:285] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	W1225 18:29:16.147661       1 logging.go:55] [core] [Channel #267 SubChannel #268]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1225 18:29:16.154456       1 logging.go:55] [core] [Channel #271 SubChannel #272]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1225 18:29:16.168128       1 logging.go:55] [core] [Channel #275 SubChannel #276]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1225 18:29:16.174795       1 logging.go:55] [core] [Channel #279 SubChannel #280]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	E1225 18:29:48.455202       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:51046: use of closed network connection
	E1225 18:29:48.593776       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:51080: use of closed network connection
	
	
	==> kube-controller-manager [e3cdb3152d28b90fb1def2b45c5dc8a83b7578b628a0c73854286d5ed340874b] <==
	I1225 18:28:47.431588       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1225 18:28:47.431613       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1225 18:28:47.431631       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1225 18:28:47.431660       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1225 18:28:47.431700       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1225 18:28:47.431727       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1225 18:28:47.431773       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1225 18:28:47.431807       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="addons-335994"
	I1225 18:28:47.431828       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1225 18:28:47.431849       1 node_lifecycle_controller.go:1025] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I1225 18:28:47.431854       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1225 18:28:47.432560       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1225 18:28:47.432583       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1225 18:28:47.433705       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1225 18:28:47.435787       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1225 18:28:47.436965       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1225 18:28:47.438626       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1225 18:28:47.446645       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1225 18:28:47.451401       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1225 18:29:02.434999       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	I1225 18:29:17.440981       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="volumesnapshots.snapshot.storage.k8s.io"
	I1225 18:29:17.441030       1 shared_informer.go:349] "Waiting for caches to sync" controller="resource quota"
	I1225 18:29:17.457771       1 shared_informer.go:349] "Waiting for caches to sync" controller="garbage collector"
	I1225 18:29:17.541857       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1225 18:29:17.558051       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	
	
	==> kube-proxy [80a662cb164e44deed87cc48e71e68239949d38c2c56a491690a04f800923b20] <==
	I1225 18:28:49.187805       1 server_linux.go:53] "Using iptables proxy"
	I1225 18:28:49.537083       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1225 18:28:49.638934       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1225 18:28:49.639040       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1225 18:28:49.639190       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1225 18:28:49.719002       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1225 18:28:49.719135       1 server_linux.go:132] "Using iptables Proxier"
	I1225 18:28:49.812884       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1225 18:28:49.819096       1 server.go:527] "Version info" version="v1.34.3"
	I1225 18:28:49.819245       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1225 18:28:49.821027       1 config.go:200] "Starting service config controller"
	I1225 18:28:49.823448       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1225 18:28:49.822757       1 config.go:106] "Starting endpoint slice config controller"
	I1225 18:28:49.823588       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1225 18:28:49.822846       1 config.go:309] "Starting node config controller"
	I1225 18:28:49.823661       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1225 18:28:49.823686       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1225 18:28:49.822771       1 config.go:403] "Starting serviceCIDR config controller"
	I1225 18:28:49.823743       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1225 18:28:49.923632       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1225 18:28:49.924003       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1225 18:28:49.924020       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [ccfe4d87852c0e13dcf53f3749926a9e274f59909436cd817078474a0546af7f] <==
	I1225 18:28:40.706269       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	E1225 18:28:40.707982       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E1225 18:28:40.709999       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1225 18:28:40.710322       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1225 18:28:40.710325       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1225 18:28:40.710347       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1225 18:28:40.710360       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1225 18:28:40.710403       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1225 18:28:40.710422       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1225 18:28:40.710441       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1225 18:28:40.710577       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1225 18:28:40.710629       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1225 18:28:40.710609       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1225 18:28:40.710681       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1225 18:28:40.710578       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1225 18:28:40.710628       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1225 18:28:40.710941       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1225 18:28:40.710959       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1225 18:28:40.711107       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1225 18:28:40.711239       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1225 18:28:41.588995       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1225 18:28:41.614228       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1225 18:28:41.627104       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1225 18:28:41.635941       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	I1225 18:28:42.306519       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Dec 25 18:29:21 addons-335994 kubelet[1317]: I1225 18:29:21.820591    1317 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="37a4f268e416e2e76b5456debcaf7fdb30215bde84ee11296c7890f83d6417a8"
	Dec 25 18:29:21 addons-335994 kubelet[1317]: I1225 18:29:21.820997    1317 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/amd-gpu-device-plugin-n5wqv" secret="" err="secret \"gcp-auth\" not found"
	Dec 25 18:29:23 addons-335994 kubelet[1317]: I1225 18:29:23.828797    1317 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/nvidia-device-plugin-daemonset-gdrj7" secret="" err="secret \"gcp-auth\" not found"
	Dec 25 18:29:23 addons-335994 kubelet[1317]: I1225 18:29:23.838198    1317 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/nvidia-device-plugin-daemonset-gdrj7" podStartSLOduration=1.405701551 podStartE2EDuration="22.838177076s" podCreationTimestamp="2025-12-25 18:29:01 +0000 UTC" firstStartedPulling="2025-12-25 18:29:01.818109362 +0000 UTC m=+19.263993602" lastFinishedPulling="2025-12-25 18:29:23.250584896 +0000 UTC m=+40.696469127" observedRunningTime="2025-12-25 18:29:23.837363149 +0000 UTC m=+41.283247397" watchObservedRunningTime="2025-12-25 18:29:23.838177076 +0000 UTC m=+41.284061324"
	Dec 25 18:29:24 addons-335994 kubelet[1317]: I1225 18:29:24.832859    1317 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/nvidia-device-plugin-daemonset-gdrj7" secret="" err="secret \"gcp-auth\" not found"
	Dec 25 18:29:25 addons-335994 kubelet[1317]: I1225 18:29:25.838475    1317 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-proxy-4kbxc" secret="" err="secret \"gcp-auth\" not found"
	Dec 25 18:29:25 addons-335994 kubelet[1317]: I1225 18:29:25.849771    1317 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/registry-proxy-4kbxc" podStartSLOduration=1.619750529 podStartE2EDuration="24.849753436s" podCreationTimestamp="2025-12-25 18:29:01 +0000 UTC" firstStartedPulling="2025-12-25 18:29:01.881588434 +0000 UTC m=+19.327472663" lastFinishedPulling="2025-12-25 18:29:25.111591333 +0000 UTC m=+42.557475570" observedRunningTime="2025-12-25 18:29:25.849086043 +0000 UTC m=+43.294970287" watchObservedRunningTime="2025-12-25 18:29:25.849753436 +0000 UTC m=+43.295637684"
	Dec 25 18:29:26 addons-335994 kubelet[1317]: I1225 18:29:26.841274    1317 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-proxy-4kbxc" secret="" err="secret \"gcp-auth\" not found"
	Dec 25 18:29:27 addons-335994 kubelet[1317]: I1225 18:29:27.870482    1317 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="gadget/gadget-8js4j" podStartSLOduration=20.503054181 podStartE2EDuration="38.870459551s" podCreationTimestamp="2025-12-25 18:28:49 +0000 UTC" firstStartedPulling="2025-12-25 18:29:08.871457569 +0000 UTC m=+26.317341809" lastFinishedPulling="2025-12-25 18:29:27.238862942 +0000 UTC m=+44.684747179" observedRunningTime="2025-12-25 18:29:27.867471144 +0000 UTC m=+45.313355392" watchObservedRunningTime="2025-12-25 18:29:27.870459551 +0000 UTC m=+45.316343799"
	Dec 25 18:29:31 addons-335994 kubelet[1317]: I1225 18:29:31.880789    1317 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="ingress-nginx/ingress-nginx-controller-6cc59ccc48-76scl" podStartSLOduration=29.439741868 podStartE2EDuration="42.880770681s" podCreationTimestamp="2025-12-25 18:28:49 +0000 UTC" firstStartedPulling="2025-12-25 18:29:17.379731422 +0000 UTC m=+34.825615665" lastFinishedPulling="2025-12-25 18:29:30.820760248 +0000 UTC m=+48.266644478" observedRunningTime="2025-12-25 18:29:31.880151167 +0000 UTC m=+49.326035416" watchObservedRunningTime="2025-12-25 18:29:31.880770681 +0000 UTC m=+49.326654927"
	Dec 25 18:29:32 addons-335994 kubelet[1317]: I1225 18:29:32.889082    1317 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="gcp-auth/gcp-auth-78565c9fb4-2mzfq" podStartSLOduration=21.804139436 podStartE2EDuration="36.889061126s" podCreationTimestamp="2025-12-25 18:28:56 +0000 UTC" firstStartedPulling="2025-12-25 18:29:17.392994704 +0000 UTC m=+34.838878944" lastFinishedPulling="2025-12-25 18:29:32.477916386 +0000 UTC m=+49.923800634" observedRunningTime="2025-12-25 18:29:32.886988478 +0000 UTC m=+50.332872726" watchObservedRunningTime="2025-12-25 18:29:32.889061126 +0000 UTC m=+50.334945376"
	Dec 25 18:29:33 addons-335994 kubelet[1317]: E1225 18:29:33.221598    1317 secret.go:189] Couldn't get secret kube-system/registry-creds-gcr: secret "registry-creds-gcr" not found
	Dec 25 18:29:33 addons-335994 kubelet[1317]: E1225 18:29:33.221691    1317 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1d56b065-e2a7-4448-984a-584747a42590-gcr-creds podName:1d56b065-e2a7-4448-984a-584747a42590 nodeName:}" failed. No retries permitted until 2025-12-25 18:30:05.221676267 +0000 UTC m=+82.667560506 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "gcr-creds" (UniqueName: "kubernetes.io/secret/1d56b065-e2a7-4448-984a-584747a42590-gcr-creds") pod "registry-creds-764b6fb674-4gpph" (UID: "1d56b065-e2a7-4448-984a-584747a42590") : secret "registry-creds-gcr" not found
	Dec 25 18:29:35 addons-335994 kubelet[1317]: I1225 18:29:35.668995    1317 csi_plugin.go:106] kubernetes.io/csi: Trying to validate a new CSI Driver with name: hostpath.csi.k8s.io endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock versions: 1.0.0
	Dec 25 18:29:35 addons-335994 kubelet[1317]: I1225 18:29:35.669049    1317 csi_plugin.go:119] kubernetes.io/csi: Register new plugin with name: hostpath.csi.k8s.io at endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock
	Dec 25 18:29:37 addons-335994 kubelet[1317]: I1225 18:29:37.916429    1317 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/csi-hostpathplugin-zgpkw" podStartSLOduration=1.851960263 podStartE2EDuration="36.916411906s" podCreationTimestamp="2025-12-25 18:29:01 +0000 UTC" firstStartedPulling="2025-12-25 18:29:01.805738287 +0000 UTC m=+19.251622524" lastFinishedPulling="2025-12-25 18:29:36.870189924 +0000 UTC m=+54.316074167" observedRunningTime="2025-12-25 18:29:37.915557701 +0000 UTC m=+55.361441951" watchObservedRunningTime="2025-12-25 18:29:37.916411906 +0000 UTC m=+55.362296154"
	Dec 25 18:29:40 addons-335994 kubelet[1317]: I1225 18:29:40.476704    1317 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/f1453947-7f99-4fd0-915d-6261fd847080-gcp-creds\") pod \"busybox\" (UID: \"f1453947-7f99-4fd0-915d-6261fd847080\") " pod="default/busybox"
	Dec 25 18:29:40 addons-335994 kubelet[1317]: I1225 18:29:40.476788    1317 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5plzp\" (UniqueName: \"kubernetes.io/projected/f1453947-7f99-4fd0-915d-6261fd847080-kube-api-access-5plzp\") pod \"busybox\" (UID: \"f1453947-7f99-4fd0-915d-6261fd847080\") " pod="default/busybox"
	Dec 25 18:29:42 addons-335994 kubelet[1317]: I1225 18:29:42.935554    1317 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/busybox" podStartSLOduration=1.636709655 podStartE2EDuration="2.93553569s" podCreationTimestamp="2025-12-25 18:29:40 +0000 UTC" firstStartedPulling="2025-12-25 18:29:40.638012151 +0000 UTC m=+58.083896392" lastFinishedPulling="2025-12-25 18:29:41.9368382 +0000 UTC m=+59.382722427" observedRunningTime="2025-12-25 18:29:42.934562817 +0000 UTC m=+60.380447065" watchObservedRunningTime="2025-12-25 18:29:42.93553569 +0000 UTC m=+60.381419937"
	Dec 25 18:29:48 addons-335994 kubelet[1317]: E1225 18:29:48.593683    1317 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 127.0.0.1:52056->127.0.0.1:40807: write tcp 127.0.0.1:52056->127.0.0.1:40807: write: broken pipe
	Dec 25 18:29:48 addons-335994 kubelet[1317]: I1225 18:29:48.639625    1317 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7769c08d-4d9d-471c-8381-e55b77c5be9d" path="/var/lib/kubelet/pods/7769c08d-4d9d-471c-8381-e55b77c5be9d/volumes"
	Dec 25 18:29:49 addons-335994 kubelet[1317]: I1225 18:29:49.241153    1317 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data\" (UniqueName: \"kubernetes.io/host-path/34712d37-fcf8-4db7-94de-2d13cd15fda2-data\") pod \"helper-pod-create-pvc-02dd38e3-0129-4012-abf1-a88532524ad2\" (UID: \"34712d37-fcf8-4db7-94de-2d13cd15fda2\") " pod="local-path-storage/helper-pod-create-pvc-02dd38e3-0129-4012-abf1-a88532524ad2"
	Dec 25 18:29:49 addons-335994 kubelet[1317]: I1225 18:29:49.241213    1317 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bfk2k\" (UniqueName: \"kubernetes.io/projected/34712d37-fcf8-4db7-94de-2d13cd15fda2-kube-api-access-bfk2k\") pod \"helper-pod-create-pvc-02dd38e3-0129-4012-abf1-a88532524ad2\" (UID: \"34712d37-fcf8-4db7-94de-2d13cd15fda2\") " pod="local-path-storage/helper-pod-create-pvc-02dd38e3-0129-4012-abf1-a88532524ad2"
	Dec 25 18:29:49 addons-335994 kubelet[1317]: I1225 18:29:49.241235    1317 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"script\" (UniqueName: \"kubernetes.io/configmap/34712d37-fcf8-4db7-94de-2d13cd15fda2-script\") pod \"helper-pod-create-pvc-02dd38e3-0129-4012-abf1-a88532524ad2\" (UID: \"34712d37-fcf8-4db7-94de-2d13cd15fda2\") " pod="local-path-storage/helper-pod-create-pvc-02dd38e3-0129-4012-abf1-a88532524ad2"
	Dec 25 18:29:49 addons-335994 kubelet[1317]: I1225 18:29:49.241271    1317 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/34712d37-fcf8-4db7-94de-2d13cd15fda2-gcp-creds\") pod \"helper-pod-create-pvc-02dd38e3-0129-4012-abf1-a88532524ad2\" (UID: \"34712d37-fcf8-4db7-94de-2d13cd15fda2\") " pod="local-path-storage/helper-pod-create-pvc-02dd38e3-0129-4012-abf1-a88532524ad2"
	
	
	==> storage-provisioner [085d0c77def90391d2a114e99f6587e2a0c0a3760dae320144cfaab0961fa907] <==
	W1225 18:29:26.114015       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1225 18:29:28.117368       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1225 18:29:28.122033       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1225 18:29:30.125002       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1225 18:29:30.129119       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1225 18:29:32.132202       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1225 18:29:32.136076       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1225 18:29:34.139188       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1225 18:29:34.144666       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1225 18:29:36.147283       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1225 18:29:36.152069       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1225 18:29:38.155454       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1225 18:29:38.158855       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1225 18:29:40.163388       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1225 18:29:40.166587       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1225 18:29:42.169609       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1225 18:29:42.173171       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1225 18:29:44.176005       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1225 18:29:44.179599       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1225 18:29:46.181874       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1225 18:29:46.186284       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1225 18:29:48.189464       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1225 18:29:48.192855       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1225 18:29:50.196468       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1225 18:29:50.199940       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-335994 -n addons-335994
helpers_test.go:270: (dbg) Run:  kubectl --context addons-335994 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:281: non-running pods: test-local-path ingress-nginx-admission-create-qdk28 ingress-nginx-admission-patch-kxd4k registry-creds-764b6fb674-4gpph helper-pod-create-pvc-02dd38e3-0129-4012-abf1-a88532524ad2
helpers_test.go:283: ======> post-mortem[TestAddons/parallel/Headlamp]: describe non-running pods <======
helpers_test.go:286: (dbg) Run:  kubectl --context addons-335994 describe pod test-local-path ingress-nginx-admission-create-qdk28 ingress-nginx-admission-patch-kxd4k registry-creds-764b6fb674-4gpph helper-pod-create-pvc-02dd38e3-0129-4012-abf1-a88532524ad2
helpers_test.go:286: (dbg) Non-zero exit: kubectl --context addons-335994 describe pod test-local-path ingress-nginx-admission-create-qdk28 ingress-nginx-admission-patch-kxd4k registry-creds-764b6fb674-4gpph helper-pod-create-pvc-02dd38e3-0129-4012-abf1-a88532524ad2: exit status 1 (69.506605ms)

                                                
                                                
-- stdout --
	Name:             test-local-path
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             <none>
	Labels:           run=test-local-path
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Containers:
	  busybox:
	    Image:      busybox:stable
	    Port:       <none>
	    Host Port:  <none>
	    Command:
	      sh
	      -c
	      echo 'local-path-provisioner' > /test/file1
	    Environment:
	      GOOGLE_APPLICATION_CREDENTIALS:  /google-app-creds.json
	      PROJECT_ID:                      this_is_fake
	      GCP_PROJECT:                     this_is_fake
	      GCLOUD_PROJECT:                  this_is_fake
	      GOOGLE_CLOUD_PROJECT:            this_is_fake
	      CLOUDSDK_CORE_PROJECT:           this_is_fake
	    Mounts:
	      /google-app-creds.json from gcp-creds (ro)
	      /test from data (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-6qn9l (ro)
	Volumes:
	  data:
	    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
	    ClaimName:  test-pvc
	    ReadOnly:   false
	  kube-api-access-6qn9l:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	  gcp-creds:
	    Type:          HostPath (bare host directory volume)
	    Path:          /var/lib/minikube/google_application_credentials.json
	    HostPathType:  File
	QoS Class:         BestEffort
	Node-Selectors:    <none>
	Tolerations:       node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                   node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:            <none>

                                                
                                                
-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-qdk28" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-kxd4k" not found
	Error from server (NotFound): pods "registry-creds-764b6fb674-4gpph" not found
	Error from server (NotFound): pods "helper-pod-create-pvc-02dd38e3-0129-4012-abf1-a88532524ad2" not found

                                                
                                                
** /stderr **
helpers_test.go:288: kubectl --context addons-335994 describe pod test-local-path ingress-nginx-admission-create-qdk28 ingress-nginx-admission-patch-kxd4k registry-creds-764b6fb674-4gpph helper-pod-create-pvc-02dd38e3-0129-4012-abf1-a88532524ad2: exit status 1
addons_test.go:1055: (dbg) Run:  out/minikube-linux-amd64 -p addons-335994 addons disable headlamp --alsologtostderr -v=1
addons_test.go:1055: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-335994 addons disable headlamp --alsologtostderr -v=1: exit status 11 (237.817574ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1225 18:29:51.194015   19998 out.go:360] Setting OutFile to fd 1 ...
	I1225 18:29:51.194170   19998 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1225 18:29:51.194182   19998 out.go:374] Setting ErrFile to fd 2...
	I1225 18:29:51.194187   19998 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1225 18:29:51.194420   19998 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22301-5579/.minikube/bin
	I1225 18:29:51.194700   19998 mustload.go:66] Loading cluster: addons-335994
	I1225 18:29:51.195051   19998 config.go:182] Loaded profile config "addons-335994": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1225 18:29:51.195069   19998 addons.go:622] checking whether the cluster is paused
	I1225 18:29:51.195172   19998 config.go:182] Loaded profile config "addons-335994": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1225 18:29:51.195221   19998 host.go:66] Checking if "addons-335994" exists ...
	I1225 18:29:51.195608   19998 cli_runner.go:164] Run: docker container inspect addons-335994 --format={{.State.Status}}
	I1225 18:29:51.213410   19998 ssh_runner.go:195] Run: systemctl --version
	I1225 18:29:51.213477   19998 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-335994
	I1225 18:29:51.230523   19998 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22301-5579/.minikube/machines/addons-335994/id_rsa Username:docker}
	I1225 18:29:51.319603   19998 cri.go:61] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1225 18:29:51.319692   19998 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1225 18:29:51.348782   19998 cri.go:96] found id: "ae13d82ab19208f4952cb94c64dec5d732ae1f39f8e0621404c7247137e52a9c"
	I1225 18:29:51.348804   19998 cri.go:96] found id: "a564f66aff1c230f12034368888345290a1ac191db5b257cbd32826875a8ad67"
	I1225 18:29:51.348809   19998 cri.go:96] found id: "fb9aa0d60f0c81e15923123520efc20954a633574450f74ec0ea0a3e90b314c8"
	I1225 18:29:51.348812   19998 cri.go:96] found id: "d2650d63d689ad88a17eeaf98093c607d510fcd6b22a23c2af9efd1f2932e619"
	I1225 18:29:51.348815   19998 cri.go:96] found id: "4128b130074a25d4a8df28170f6846d37ebfd7a2d07a5fc33dab746c82648915"
	I1225 18:29:51.348818   19998 cri.go:96] found id: "33817592eb0db2e6f07a567e2c6a05ce69c1ac649b019e92188ab696db18c932"
	I1225 18:29:51.348821   19998 cri.go:96] found id: "9b67245ec9b381405f30593d867f5e5cbfffaf89edb502cac7c7f5a98858b0ab"
	I1225 18:29:51.348824   19998 cri.go:96] found id: "2919cee4cae672e017d2cc057b52625b032a2c6ef08da6fbf0620796be106460"
	I1225 18:29:51.348834   19998 cri.go:96] found id: "8fbc3d212062a38e6622ed9fbc3f0889258cf6f4e7d4fb14afd72b9fe1b3111f"
	I1225 18:29:51.348841   19998 cri.go:96] found id: "a10d92f993ff9eeafa2fcb2a92dc72f8c14e2c06d1d5bdb76b1599e29961486e"
	I1225 18:29:51.348846   19998 cri.go:96] found id: "0d7c280dc245249ed1f3be62c6c9ae663ce51ccaf65687f39e4e60bd34291ccf"
	I1225 18:29:51.348851   19998 cri.go:96] found id: "e00018aa5b33d1f32fa4e4a0a1d02edaff40699b654dabb056cb2e317b7d6c59"
	I1225 18:29:51.348859   19998 cri.go:96] found id: "e8585c6c0c58b6cd9c9959116cf5a0b20dc858dc19e2352cc6ce199a37e5a7aa"
	I1225 18:29:51.348865   19998 cri.go:96] found id: "b68e3df89706be4b0915e318354d2368e0bf41b39dce1a0641435ee4df7548d2"
	I1225 18:29:51.348872   19998 cri.go:96] found id: "9e74ae6ae78b4dc3b1c93a60da879857955a5f6be8a7782273964ab44b255c66"
	I1225 18:29:51.348878   19998 cri.go:96] found id: "c5c8ab56e74b2b7a6373b1a58b03e6fb619d169b58601995d735e897d9c758ea"
	I1225 18:29:51.348881   19998 cri.go:96] found id: "eacb0925a485dcae72269b51d9663345c1f11632b5013a549a26bf8fb2fb5c80"
	I1225 18:29:51.348886   19998 cri.go:96] found id: "085d0c77def90391d2a114e99f6587e2a0c0a3760dae320144cfaab0961fa907"
	I1225 18:29:51.348888   19998 cri.go:96] found id: "e2dc79b0850b584749fd199f4bbde9ba7b322136a49f4c877b6c309de232e3bc"
	I1225 18:29:51.348907   19998 cri.go:96] found id: "80a662cb164e44deed87cc48e71e68239949d38c2c56a491690a04f800923b20"
	I1225 18:29:51.348916   19998 cri.go:96] found id: "e3cdb3152d28b90fb1def2b45c5dc8a83b7578b628a0c73854286d5ed340874b"
	I1225 18:29:51.348924   19998 cri.go:96] found id: "ccfe4d87852c0e13dcf53f3749926a9e274f59909436cd817078474a0546af7f"
	I1225 18:29:51.348929   19998 cri.go:96] found id: "ae5624121adcc542b0fa7d372b4201440bf41b8429c7c957b8f58572f05dce8b"
	I1225 18:29:51.348936   19998 cri.go:96] found id: "4beb6e0a291214adc57d2a068c0b6283ca02b7da651625ff32c7fa8173b8294a"
	I1225 18:29:51.348941   19998 cri.go:96] found id: ""
	I1225 18:29:51.348984   19998 ssh_runner.go:195] Run: sudo runc list -f json
	I1225 18:29:51.363074   19998 out.go:203] 
	W1225 18:29:51.364177   19998 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-25T18:29:51Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-25T18:29:51Z" level=error msg="open /run/runc: no such file or directory"
	
	W1225 18:29:51.364196   19998 out.go:285] * 
	* 
	W1225 18:29:51.364829   19998 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_efe3f0a65eabdab15324ffdebd5a66da17706a9c_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_efe3f0a65eabdab15324ffdebd5a66da17706a9c_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1225 18:29:51.365957   19998 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1057: failed to disable headlamp addon: args "out/minikube-linux-amd64 -p addons-335994 addons disable headlamp --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/Headlamp (2.54s)
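Note on the repeated MK_ADDON_DISABLE_PAUSED failures: before disabling an addon, minikube checks whether the cluster is paused by listing kube-system containers through crictl and then running "sudo runc list -f json" on the node; on this crio runner /run/runc does not exist, so the second command exits 1 and each "addons disable" invocation in this run fails with exit status 11. A minimal manual reproduction inside the node (a sketch, assuming the same addons-335994 profile is still running) would be:

	minikube -p addons-335994 ssh
	sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"   # succeeds, prints the container IDs seen above
	sudo runc list -f json                                                                              # fails: open /run/runc: no such file or directory

The same paused check and the same runc error appear in the CloudSpanner, LocalPath, NvidiaDevicePlugin and Yakd failures below.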

                                                
                                    
TestAddons/parallel/CloudSpanner (5.24s)

                                                
                                                
=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:842: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:353: "cloud-spanner-emulator-85df47b6f4-lr4nx" [c3e4fbcd-fd11-4d3c-ab59-09d2876b7274] Running
addons_test.go:842: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.002818415s
addons_test.go:1055: (dbg) Run:  out/minikube-linux-amd64 -p addons-335994 addons disable cloud-spanner --alsologtostderr -v=1
addons_test.go:1055: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-335994 addons disable cloud-spanner --alsologtostderr -v=1: exit status 11 (234.521422ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1225 18:30:12.879217   23019 out.go:360] Setting OutFile to fd 1 ...
	I1225 18:30:12.879482   23019 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1225 18:30:12.879491   23019 out.go:374] Setting ErrFile to fd 2...
	I1225 18:30:12.879496   23019 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1225 18:30:12.879678   23019 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22301-5579/.minikube/bin
	I1225 18:30:12.879928   23019 mustload.go:66] Loading cluster: addons-335994
	I1225 18:30:12.880226   23019 config.go:182] Loaded profile config "addons-335994": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1225 18:30:12.880238   23019 addons.go:622] checking whether the cluster is paused
	I1225 18:30:12.880316   23019 config.go:182] Loaded profile config "addons-335994": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1225 18:30:12.880331   23019 host.go:66] Checking if "addons-335994" exists ...
	I1225 18:30:12.880707   23019 cli_runner.go:164] Run: docker container inspect addons-335994 --format={{.State.Status}}
	I1225 18:30:12.898454   23019 ssh_runner.go:195] Run: systemctl --version
	I1225 18:30:12.898516   23019 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-335994
	I1225 18:30:12.916261   23019 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22301-5579/.minikube/machines/addons-335994/id_rsa Username:docker}
	I1225 18:30:13.005540   23019 cri.go:61] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1225 18:30:13.005621   23019 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1225 18:30:13.034569   23019 cri.go:96] found id: "17526cde63aa65c4126fa503f16bd14c465678bcb2b913d9c626d26bf26f6a9b"
	I1225 18:30:13.034595   23019 cri.go:96] found id: "ae13d82ab19208f4952cb94c64dec5d732ae1f39f8e0621404c7247137e52a9c"
	I1225 18:30:13.034601   23019 cri.go:96] found id: "a564f66aff1c230f12034368888345290a1ac191db5b257cbd32826875a8ad67"
	I1225 18:30:13.034606   23019 cri.go:96] found id: "fb9aa0d60f0c81e15923123520efc20954a633574450f74ec0ea0a3e90b314c8"
	I1225 18:30:13.034611   23019 cri.go:96] found id: "d2650d63d689ad88a17eeaf98093c607d510fcd6b22a23c2af9efd1f2932e619"
	I1225 18:30:13.034620   23019 cri.go:96] found id: "4128b130074a25d4a8df28170f6846d37ebfd7a2d07a5fc33dab746c82648915"
	I1225 18:30:13.034624   23019 cri.go:96] found id: "33817592eb0db2e6f07a567e2c6a05ce69c1ac649b019e92188ab696db18c932"
	I1225 18:30:13.034628   23019 cri.go:96] found id: "9b67245ec9b381405f30593d867f5e5cbfffaf89edb502cac7c7f5a98858b0ab"
	I1225 18:30:13.034633   23019 cri.go:96] found id: "2919cee4cae672e017d2cc057b52625b032a2c6ef08da6fbf0620796be106460"
	I1225 18:30:13.034639   23019 cri.go:96] found id: "8fbc3d212062a38e6622ed9fbc3f0889258cf6f4e7d4fb14afd72b9fe1b3111f"
	I1225 18:30:13.034642   23019 cri.go:96] found id: "a10d92f993ff9eeafa2fcb2a92dc72f8c14e2c06d1d5bdb76b1599e29961486e"
	I1225 18:30:13.034645   23019 cri.go:96] found id: "0d7c280dc245249ed1f3be62c6c9ae663ce51ccaf65687f39e4e60bd34291ccf"
	I1225 18:30:13.034648   23019 cri.go:96] found id: "e00018aa5b33d1f32fa4e4a0a1d02edaff40699b654dabb056cb2e317b7d6c59"
	I1225 18:30:13.034651   23019 cri.go:96] found id: "e8585c6c0c58b6cd9c9959116cf5a0b20dc858dc19e2352cc6ce199a37e5a7aa"
	I1225 18:30:13.034661   23019 cri.go:96] found id: "b68e3df89706be4b0915e318354d2368e0bf41b39dce1a0641435ee4df7548d2"
	I1225 18:30:13.034680   23019 cri.go:96] found id: "9e74ae6ae78b4dc3b1c93a60da879857955a5f6be8a7782273964ab44b255c66"
	I1225 18:30:13.034688   23019 cri.go:96] found id: "c5c8ab56e74b2b7a6373b1a58b03e6fb619d169b58601995d735e897d9c758ea"
	I1225 18:30:13.034695   23019 cri.go:96] found id: "eacb0925a485dcae72269b51d9663345c1f11632b5013a549a26bf8fb2fb5c80"
	I1225 18:30:13.034703   23019 cri.go:96] found id: "085d0c77def90391d2a114e99f6587e2a0c0a3760dae320144cfaab0961fa907"
	I1225 18:30:13.034707   23019 cri.go:96] found id: "e2dc79b0850b584749fd199f4bbde9ba7b322136a49f4c877b6c309de232e3bc"
	I1225 18:30:13.034715   23019 cri.go:96] found id: "80a662cb164e44deed87cc48e71e68239949d38c2c56a491690a04f800923b20"
	I1225 18:30:13.034720   23019 cri.go:96] found id: "e3cdb3152d28b90fb1def2b45c5dc8a83b7578b628a0c73854286d5ed340874b"
	I1225 18:30:13.034727   23019 cri.go:96] found id: "ccfe4d87852c0e13dcf53f3749926a9e274f59909436cd817078474a0546af7f"
	I1225 18:30:13.034731   23019 cri.go:96] found id: "ae5624121adcc542b0fa7d372b4201440bf41b8429c7c957b8f58572f05dce8b"
	I1225 18:30:13.034734   23019 cri.go:96] found id: "4beb6e0a291214adc57d2a068c0b6283ca02b7da651625ff32c7fa8173b8294a"
	I1225 18:30:13.034739   23019 cri.go:96] found id: ""
	I1225 18:30:13.034776   23019 ssh_runner.go:195] Run: sudo runc list -f json
	I1225 18:30:13.048651   23019 out.go:203] 
	W1225 18:30:13.049883   23019 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-25T18:30:13Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-25T18:30:13Z" level=error msg="open /run/runc: no such file or directory"
	
	W1225 18:30:13.049924   23019 out.go:285] * 
	* 
	W1225 18:30:13.050637   23019 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_e93ff976b7e98e1dc466aded9385c0856b6d1b41_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_e93ff976b7e98e1dc466aded9385c0856b6d1b41_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1225 18:30:13.051832   23019 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1057: failed to disable cloud-spanner addon: args "out/minikube-linux-amd64 -p addons-335994 addons disable cloud-spanner --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/CloudSpanner (5.24s)

                                                
                                    
TestAddons/parallel/LocalPath (8.08s)

                                                
                                                
=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/LocalPath
addons_test.go:951: (dbg) Run:  kubectl --context addons-335994 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:957: (dbg) Run:  kubectl --context addons-335994 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:961: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:403: (dbg) Run:  kubectl --context addons-335994 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-335994 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-335994 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-335994 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-335994 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:964: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:353: "test-local-path" [0a51a2d6-ef8d-47d9-8f82-80b058dd67a1] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:353: "test-local-path" [0a51a2d6-ef8d-47d9-8f82-80b058dd67a1] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:353: "test-local-path" [0a51a2d6-ef8d-47d9-8f82-80b058dd67a1] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:964: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 3.002847566s
addons_test.go:969: (dbg) Run:  kubectl --context addons-335994 get pvc test-pvc -o=json
addons_test.go:978: (dbg) Run:  out/minikube-linux-amd64 -p addons-335994 ssh "cat /opt/local-path-provisioner/pvc-02dd38e3-0129-4012-abf1-a88532524ad2_default_test-pvc/file1"
addons_test.go:990: (dbg) Run:  kubectl --context addons-335994 delete pod test-local-path
addons_test.go:994: (dbg) Run:  kubectl --context addons-335994 delete pvc test-pvc
addons_test.go:1055: (dbg) Run:  out/minikube-linux-amd64 -p addons-335994 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:1055: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-335994 addons disable storage-provisioner-rancher --alsologtostderr -v=1: exit status 11 (233.970292ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1225 18:29:56.730845   20493 out.go:360] Setting OutFile to fd 1 ...
	I1225 18:29:56.731023   20493 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1225 18:29:56.731033   20493 out.go:374] Setting ErrFile to fd 2...
	I1225 18:29:56.731037   20493 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1225 18:29:56.731264   20493 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22301-5579/.minikube/bin
	I1225 18:29:56.731559   20493 mustload.go:66] Loading cluster: addons-335994
	I1225 18:29:56.731971   20493 config.go:182] Loaded profile config "addons-335994": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1225 18:29:56.731992   20493 addons.go:622] checking whether the cluster is paused
	I1225 18:29:56.732098   20493 config.go:182] Loaded profile config "addons-335994": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1225 18:29:56.732116   20493 host.go:66] Checking if "addons-335994" exists ...
	I1225 18:29:56.732481   20493 cli_runner.go:164] Run: docker container inspect addons-335994 --format={{.State.Status}}
	I1225 18:29:56.750435   20493 ssh_runner.go:195] Run: systemctl --version
	I1225 18:29:56.750508   20493 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-335994
	I1225 18:29:56.767693   20493 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22301-5579/.minikube/machines/addons-335994/id_rsa Username:docker}
	I1225 18:29:56.856181   20493 cri.go:61] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1225 18:29:56.856285   20493 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1225 18:29:56.887156   20493 cri.go:96] found id: "ae13d82ab19208f4952cb94c64dec5d732ae1f39f8e0621404c7247137e52a9c"
	I1225 18:29:56.887199   20493 cri.go:96] found id: "a564f66aff1c230f12034368888345290a1ac191db5b257cbd32826875a8ad67"
	I1225 18:29:56.887203   20493 cri.go:96] found id: "fb9aa0d60f0c81e15923123520efc20954a633574450f74ec0ea0a3e90b314c8"
	I1225 18:29:56.887206   20493 cri.go:96] found id: "d2650d63d689ad88a17eeaf98093c607d510fcd6b22a23c2af9efd1f2932e619"
	I1225 18:29:56.887209   20493 cri.go:96] found id: "4128b130074a25d4a8df28170f6846d37ebfd7a2d07a5fc33dab746c82648915"
	I1225 18:29:56.887213   20493 cri.go:96] found id: "33817592eb0db2e6f07a567e2c6a05ce69c1ac649b019e92188ab696db18c932"
	I1225 18:29:56.887216   20493 cri.go:96] found id: "9b67245ec9b381405f30593d867f5e5cbfffaf89edb502cac7c7f5a98858b0ab"
	I1225 18:29:56.887219   20493 cri.go:96] found id: "2919cee4cae672e017d2cc057b52625b032a2c6ef08da6fbf0620796be106460"
	I1225 18:29:56.887221   20493 cri.go:96] found id: "8fbc3d212062a38e6622ed9fbc3f0889258cf6f4e7d4fb14afd72b9fe1b3111f"
	I1225 18:29:56.887231   20493 cri.go:96] found id: "a10d92f993ff9eeafa2fcb2a92dc72f8c14e2c06d1d5bdb76b1599e29961486e"
	I1225 18:29:56.887234   20493 cri.go:96] found id: "0d7c280dc245249ed1f3be62c6c9ae663ce51ccaf65687f39e4e60bd34291ccf"
	I1225 18:29:56.887236   20493 cri.go:96] found id: "e00018aa5b33d1f32fa4e4a0a1d02edaff40699b654dabb056cb2e317b7d6c59"
	I1225 18:29:56.887239   20493 cri.go:96] found id: "e8585c6c0c58b6cd9c9959116cf5a0b20dc858dc19e2352cc6ce199a37e5a7aa"
	I1225 18:29:56.887242   20493 cri.go:96] found id: "b68e3df89706be4b0915e318354d2368e0bf41b39dce1a0641435ee4df7548d2"
	I1225 18:29:56.887245   20493 cri.go:96] found id: "9e74ae6ae78b4dc3b1c93a60da879857955a5f6be8a7782273964ab44b255c66"
	I1225 18:29:56.887256   20493 cri.go:96] found id: "c5c8ab56e74b2b7a6373b1a58b03e6fb619d169b58601995d735e897d9c758ea"
	I1225 18:29:56.887261   20493 cri.go:96] found id: "eacb0925a485dcae72269b51d9663345c1f11632b5013a549a26bf8fb2fb5c80"
	I1225 18:29:56.887265   20493 cri.go:96] found id: "085d0c77def90391d2a114e99f6587e2a0c0a3760dae320144cfaab0961fa907"
	I1225 18:29:56.887268   20493 cri.go:96] found id: "e2dc79b0850b584749fd199f4bbde9ba7b322136a49f4c877b6c309de232e3bc"
	I1225 18:29:56.887271   20493 cri.go:96] found id: "80a662cb164e44deed87cc48e71e68239949d38c2c56a491690a04f800923b20"
	I1225 18:29:56.887277   20493 cri.go:96] found id: "e3cdb3152d28b90fb1def2b45c5dc8a83b7578b628a0c73854286d5ed340874b"
	I1225 18:29:56.887280   20493 cri.go:96] found id: "ccfe4d87852c0e13dcf53f3749926a9e274f59909436cd817078474a0546af7f"
	I1225 18:29:56.887282   20493 cri.go:96] found id: "ae5624121adcc542b0fa7d372b4201440bf41b8429c7c957b8f58572f05dce8b"
	I1225 18:29:56.887286   20493 cri.go:96] found id: "4beb6e0a291214adc57d2a068c0b6283ca02b7da651625ff32c7fa8173b8294a"
	I1225 18:29:56.887289   20493 cri.go:96] found id: ""
	I1225 18:29:56.887335   20493 ssh_runner.go:195] Run: sudo runc list -f json
	I1225 18:29:56.901602   20493 out.go:203] 
	W1225 18:29:56.903079   20493 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-25T18:29:56Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-25T18:29:56Z" level=error msg="open /run/runc: no such file or directory"
	
	W1225 18:29:56.903098   20493 out.go:285] * 
	* 
	W1225 18:29:56.903738   20493 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_e8b2053d4ef30ba659303f708d034237180eb1ed_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_e8b2053d4ef30ba659303f708d034237180eb1ed_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1225 18:29:56.905145   20493 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1057: failed to disable storage-provisioner-rancher addon: args "out/minikube-linux-amd64 -p addons-335994 addons disable storage-provisioner-rancher --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/LocalPath (8.08s)

                                                
                                    
TestAddons/parallel/NvidiaDevicePlugin (5.25s)

                                                
                                                
=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:1027: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:353: "nvidia-device-plugin-daemonset-gdrj7" [666e25f3-c012-4fd4-945a-39e959e52731] Running
addons_test.go:1027: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 5.003843982s
addons_test.go:1055: (dbg) Run:  out/minikube-linux-amd64 -p addons-335994 addons disable nvidia-device-plugin --alsologtostderr -v=1
addons_test.go:1055: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-335994 addons disable nvidia-device-plugin --alsologtostderr -v=1: exit status 11 (242.036602ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1225 18:29:53.896299   20204 out.go:360] Setting OutFile to fd 1 ...
	I1225 18:29:53.896573   20204 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1225 18:29:53.896583   20204 out.go:374] Setting ErrFile to fd 2...
	I1225 18:29:53.896587   20204 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1225 18:29:53.896787   20204 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22301-5579/.minikube/bin
	I1225 18:29:53.897095   20204 mustload.go:66] Loading cluster: addons-335994
	I1225 18:29:53.897531   20204 config.go:182] Loaded profile config "addons-335994": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1225 18:29:53.897552   20204 addons.go:622] checking whether the cluster is paused
	I1225 18:29:53.897678   20204 config.go:182] Loaded profile config "addons-335994": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1225 18:29:53.897698   20204 host.go:66] Checking if "addons-335994" exists ...
	I1225 18:29:53.898148   20204 cli_runner.go:164] Run: docker container inspect addons-335994 --format={{.State.Status}}
	I1225 18:29:53.917441   20204 ssh_runner.go:195] Run: systemctl --version
	I1225 18:29:53.917493   20204 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-335994
	I1225 18:29:53.936140   20204 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22301-5579/.minikube/machines/addons-335994/id_rsa Username:docker}
	I1225 18:29:54.026354   20204 cri.go:61] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1225 18:29:54.026439   20204 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1225 18:29:54.055325   20204 cri.go:96] found id: "ae13d82ab19208f4952cb94c64dec5d732ae1f39f8e0621404c7247137e52a9c"
	I1225 18:29:54.055355   20204 cri.go:96] found id: "a564f66aff1c230f12034368888345290a1ac191db5b257cbd32826875a8ad67"
	I1225 18:29:54.055362   20204 cri.go:96] found id: "fb9aa0d60f0c81e15923123520efc20954a633574450f74ec0ea0a3e90b314c8"
	I1225 18:29:54.055367   20204 cri.go:96] found id: "d2650d63d689ad88a17eeaf98093c607d510fcd6b22a23c2af9efd1f2932e619"
	I1225 18:29:54.055372   20204 cri.go:96] found id: "4128b130074a25d4a8df28170f6846d37ebfd7a2d07a5fc33dab746c82648915"
	I1225 18:29:54.055377   20204 cri.go:96] found id: "33817592eb0db2e6f07a567e2c6a05ce69c1ac649b019e92188ab696db18c932"
	I1225 18:29:54.055382   20204 cri.go:96] found id: "9b67245ec9b381405f30593d867f5e5cbfffaf89edb502cac7c7f5a98858b0ab"
	I1225 18:29:54.055386   20204 cri.go:96] found id: "2919cee4cae672e017d2cc057b52625b032a2c6ef08da6fbf0620796be106460"
	I1225 18:29:54.055390   20204 cri.go:96] found id: "8fbc3d212062a38e6622ed9fbc3f0889258cf6f4e7d4fb14afd72b9fe1b3111f"
	I1225 18:29:54.055402   20204 cri.go:96] found id: "a10d92f993ff9eeafa2fcb2a92dc72f8c14e2c06d1d5bdb76b1599e29961486e"
	I1225 18:29:54.055405   20204 cri.go:96] found id: "0d7c280dc245249ed1f3be62c6c9ae663ce51ccaf65687f39e4e60bd34291ccf"
	I1225 18:29:54.055408   20204 cri.go:96] found id: "e00018aa5b33d1f32fa4e4a0a1d02edaff40699b654dabb056cb2e317b7d6c59"
	I1225 18:29:54.055410   20204 cri.go:96] found id: "e8585c6c0c58b6cd9c9959116cf5a0b20dc858dc19e2352cc6ce199a37e5a7aa"
	I1225 18:29:54.055413   20204 cri.go:96] found id: "b68e3df89706be4b0915e318354d2368e0bf41b39dce1a0641435ee4df7548d2"
	I1225 18:29:54.055416   20204 cri.go:96] found id: "9e74ae6ae78b4dc3b1c93a60da879857955a5f6be8a7782273964ab44b255c66"
	I1225 18:29:54.055426   20204 cri.go:96] found id: "c5c8ab56e74b2b7a6373b1a58b03e6fb619d169b58601995d735e897d9c758ea"
	I1225 18:29:54.055430   20204 cri.go:96] found id: "eacb0925a485dcae72269b51d9663345c1f11632b5013a549a26bf8fb2fb5c80"
	I1225 18:29:54.055442   20204 cri.go:96] found id: "085d0c77def90391d2a114e99f6587e2a0c0a3760dae320144cfaab0961fa907"
	I1225 18:29:54.055451   20204 cri.go:96] found id: "e2dc79b0850b584749fd199f4bbde9ba7b322136a49f4c877b6c309de232e3bc"
	I1225 18:29:54.055456   20204 cri.go:96] found id: "80a662cb164e44deed87cc48e71e68239949d38c2c56a491690a04f800923b20"
	I1225 18:29:54.055463   20204 cri.go:96] found id: "e3cdb3152d28b90fb1def2b45c5dc8a83b7578b628a0c73854286d5ed340874b"
	I1225 18:29:54.055468   20204 cri.go:96] found id: "ccfe4d87852c0e13dcf53f3749926a9e274f59909436cd817078474a0546af7f"
	I1225 18:29:54.055476   20204 cri.go:96] found id: "ae5624121adcc542b0fa7d372b4201440bf41b8429c7c957b8f58572f05dce8b"
	I1225 18:29:54.055480   20204 cri.go:96] found id: "4beb6e0a291214adc57d2a068c0b6283ca02b7da651625ff32c7fa8173b8294a"
	I1225 18:29:54.055488   20204 cri.go:96] found id: ""
	I1225 18:29:54.055538   20204 ssh_runner.go:195] Run: sudo runc list -f json
	I1225 18:29:54.069547   20204 out.go:203] 
	W1225 18:29:54.070569   20204 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-25T18:29:54Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-25T18:29:54Z" level=error msg="open /run/runc: no such file or directory"
	
	W1225 18:29:54.070587   20204 out.go:285] * 
	* 
	W1225 18:29:54.071502   20204 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_47e1a72799625313bd916979b0f8aa84efd54736_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_47e1a72799625313bd916979b0f8aa84efd54736_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1225 18:29:54.073124   20204 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1057: failed to disable nvidia-device-plugin addon: args "out/minikube-linux-amd64 -p addons-335994 addons disable nvidia-device-plugin --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/NvidiaDevicePlugin (5.25s)
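Note: all of the addons-disable failures in this report share one signature. Before disabling an addon, minikube checks whether the cluster is paused: it lists kube-system containers with crictl and then runs `sudo runc list -f json`, which on this crio node exits 1 with "open /run/runc: no such file or directory", so every `addons disable` command aborts with MK_ADDON_DISABLE_PAUSED (exit status 11). The following is only a minimal sketch, not minikube's implementation, of how a caller could treat a missing runc state directory as "no containers" rather than a hard error; the helper name listRuncContainers is made up for this example.

// Minimal sketch (not minikube's code): run `sudo runc list -f json` and treat a
// missing /run/runc state directory as an empty container list instead of an error.
package main

import (
	"bytes"
	"encoding/json"
	"fmt"
	"os/exec"
	"strings"
)

// runcContainer mirrors only the `runc list -f json` fields used in this sketch.
type runcContainer struct {
	ID     string `json:"id"`
	Status string `json:"status"`
}

// listRuncContainers is a hypothetical helper, not a minikube function.
func listRuncContainers() ([]runcContainer, error) {
	cmd := exec.Command("sudo", "runc", "list", "-f", "json")
	var stdout, stderr bytes.Buffer
	cmd.Stdout, cmd.Stderr = &stdout, &stderr
	if err := cmd.Run(); err != nil {
		// This is the exact failure seen above: runc has no state dir on this node.
		if strings.Contains(stderr.String(), "no such file or directory") {
			return nil, nil
		}
		return nil, fmt.Errorf("runc list: %v: %s", err, stderr.String())
	}
	var containers []runcContainer
	if len(bytes.TrimSpace(stdout.Bytes())) == 0 {
		return containers, nil
	}
	if err := json.Unmarshal(stdout.Bytes(), &containers); err != nil {
		return nil, err
	}
	return containers, nil
}

func main() {
	containers, err := listRuncContainers()
	fmt.Println(len(containers), err)
}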

                                                
                                    
TestAddons/parallel/Yakd (5.24s)

                                                
                                                
=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Yakd
addons_test.go:1049: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:353: "yakd-dashboard-7896b7cb5b-vq4hd" [c8c24c7c-e45c-46cb-9e58-74d69f19a5a9] Running
addons_test.go:1049: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 5.003095868s
addons_test.go:1055: (dbg) Run:  out/minikube-linux-amd64 -p addons-335994 addons disable yakd --alsologtostderr -v=1
addons_test.go:1055: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-335994 addons disable yakd --alsologtostderr -v=1: exit status 11 (232.649177ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1225 18:30:07.634580   22188 out.go:360] Setting OutFile to fd 1 ...
	I1225 18:30:07.634866   22188 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1225 18:30:07.634878   22188 out.go:374] Setting ErrFile to fd 2...
	I1225 18:30:07.634885   22188 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1225 18:30:07.635105   22188 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22301-5579/.minikube/bin
	I1225 18:30:07.635381   22188 mustload.go:66] Loading cluster: addons-335994
	I1225 18:30:07.635706   22188 config.go:182] Loaded profile config "addons-335994": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1225 18:30:07.635725   22188 addons.go:622] checking whether the cluster is paused
	I1225 18:30:07.635823   22188 config.go:182] Loaded profile config "addons-335994": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1225 18:30:07.635862   22188 host.go:66] Checking if "addons-335994" exists ...
	I1225 18:30:07.636265   22188 cli_runner.go:164] Run: docker container inspect addons-335994 --format={{.State.Status}}
	I1225 18:30:07.654333   22188 ssh_runner.go:195] Run: systemctl --version
	I1225 18:30:07.654398   22188 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-335994
	I1225 18:30:07.673272   22188 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22301-5579/.minikube/machines/addons-335994/id_rsa Username:docker}
	I1225 18:30:07.763399   22188 cri.go:61] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1225 18:30:07.763465   22188 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1225 18:30:07.792844   22188 cri.go:96] found id: "17526cde63aa65c4126fa503f16bd14c465678bcb2b913d9c626d26bf26f6a9b"
	I1225 18:30:07.792879   22188 cri.go:96] found id: "ae13d82ab19208f4952cb94c64dec5d732ae1f39f8e0621404c7247137e52a9c"
	I1225 18:30:07.792925   22188 cri.go:96] found id: "a564f66aff1c230f12034368888345290a1ac191db5b257cbd32826875a8ad67"
	I1225 18:30:07.792935   22188 cri.go:96] found id: "fb9aa0d60f0c81e15923123520efc20954a633574450f74ec0ea0a3e90b314c8"
	I1225 18:30:07.792941   22188 cri.go:96] found id: "d2650d63d689ad88a17eeaf98093c607d510fcd6b22a23c2af9efd1f2932e619"
	I1225 18:30:07.792948   22188 cri.go:96] found id: "4128b130074a25d4a8df28170f6846d37ebfd7a2d07a5fc33dab746c82648915"
	I1225 18:30:07.792951   22188 cri.go:96] found id: "33817592eb0db2e6f07a567e2c6a05ce69c1ac649b019e92188ab696db18c932"
	I1225 18:30:07.792955   22188 cri.go:96] found id: "9b67245ec9b381405f30593d867f5e5cbfffaf89edb502cac7c7f5a98858b0ab"
	I1225 18:30:07.792961   22188 cri.go:96] found id: "2919cee4cae672e017d2cc057b52625b032a2c6ef08da6fbf0620796be106460"
	I1225 18:30:07.792977   22188 cri.go:96] found id: "8fbc3d212062a38e6622ed9fbc3f0889258cf6f4e7d4fb14afd72b9fe1b3111f"
	I1225 18:30:07.792984   22188 cri.go:96] found id: "a10d92f993ff9eeafa2fcb2a92dc72f8c14e2c06d1d5bdb76b1599e29961486e"
	I1225 18:30:07.792988   22188 cri.go:96] found id: "0d7c280dc245249ed1f3be62c6c9ae663ce51ccaf65687f39e4e60bd34291ccf"
	I1225 18:30:07.792994   22188 cri.go:96] found id: "e00018aa5b33d1f32fa4e4a0a1d02edaff40699b654dabb056cb2e317b7d6c59"
	I1225 18:30:07.792998   22188 cri.go:96] found id: "e8585c6c0c58b6cd9c9959116cf5a0b20dc858dc19e2352cc6ce199a37e5a7aa"
	I1225 18:30:07.793001   22188 cri.go:96] found id: "b68e3df89706be4b0915e318354d2368e0bf41b39dce1a0641435ee4df7548d2"
	I1225 18:30:07.793012   22188 cri.go:96] found id: "9e74ae6ae78b4dc3b1c93a60da879857955a5f6be8a7782273964ab44b255c66"
	I1225 18:30:07.793017   22188 cri.go:96] found id: "c5c8ab56e74b2b7a6373b1a58b03e6fb619d169b58601995d735e897d9c758ea"
	I1225 18:30:07.793021   22188 cri.go:96] found id: "eacb0925a485dcae72269b51d9663345c1f11632b5013a549a26bf8fb2fb5c80"
	I1225 18:30:07.793027   22188 cri.go:96] found id: "085d0c77def90391d2a114e99f6587e2a0c0a3760dae320144cfaab0961fa907"
	I1225 18:30:07.793030   22188 cri.go:96] found id: "e2dc79b0850b584749fd199f4bbde9ba7b322136a49f4c877b6c309de232e3bc"
	I1225 18:30:07.793033   22188 cri.go:96] found id: "80a662cb164e44deed87cc48e71e68239949d38c2c56a491690a04f800923b20"
	I1225 18:30:07.793036   22188 cri.go:96] found id: "e3cdb3152d28b90fb1def2b45c5dc8a83b7578b628a0c73854286d5ed340874b"
	I1225 18:30:07.793039   22188 cri.go:96] found id: "ccfe4d87852c0e13dcf53f3749926a9e274f59909436cd817078474a0546af7f"
	I1225 18:30:07.793046   22188 cri.go:96] found id: "ae5624121adcc542b0fa7d372b4201440bf41b8429c7c957b8f58572f05dce8b"
	I1225 18:30:07.793049   22188 cri.go:96] found id: "4beb6e0a291214adc57d2a068c0b6283ca02b7da651625ff32c7fa8173b8294a"
	I1225 18:30:07.793055   22188 cri.go:96] found id: ""
	I1225 18:30:07.793098   22188 ssh_runner.go:195] Run: sudo runc list -f json
	I1225 18:30:07.807064   22188 out.go:203] 
	W1225 18:30:07.808400   22188 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-25T18:30:07Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-25T18:30:07Z" level=error msg="open /run/runc: no such file or directory"
	
	W1225 18:30:07.808428   22188 out.go:285] * 
	* 
	W1225 18:30:07.809142   22188 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_82e5d844def28f20a5cac88dc27578ab5d1e7e1a_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_82e5d844def28f20a5cac88dc27578ab5d1e7e1a_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1225 18:30:07.810370   22188 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1057: failed to disable yakd addon: args "out/minikube-linux-amd64 -p addons-335994 addons disable yakd --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/Yakd (5.24s)

                                                
                                    
TestAddons/parallel/AmdGpuDevicePlugin (6.23s)

                                                
                                                
=== RUN   TestAddons/parallel/AmdGpuDevicePlugin
=== PAUSE TestAddons/parallel/AmdGpuDevicePlugin

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/AmdGpuDevicePlugin
addons_test.go:1040: (dbg) TestAddons/parallel/AmdGpuDevicePlugin: waiting 6m0s for pods matching "name=amd-gpu-device-plugin" in namespace "kube-system" ...
helpers_test.go:353: "amd-gpu-device-plugin-n5wqv" [ebce9916-ffda-466a-99d9-dc0c42aa7b3c] Running
addons_test.go:1040: (dbg) TestAddons/parallel/AmdGpuDevicePlugin: name=amd-gpu-device-plugin healthy within 6.003663296s
addons_test.go:1055: (dbg) Run:  out/minikube-linux-amd64 -p addons-335994 addons disable amd-gpu-device-plugin --alsologtostderr -v=1
addons_test.go:1055: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-335994 addons disable amd-gpu-device-plugin --alsologtostderr -v=1: exit status 11 (227.636268ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1225 18:30:08.523856   22268 out.go:360] Setting OutFile to fd 1 ...
	I1225 18:30:08.524028   22268 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1225 18:30:08.524040   22268 out.go:374] Setting ErrFile to fd 2...
	I1225 18:30:08.524044   22268 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1225 18:30:08.524224   22268 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22301-5579/.minikube/bin
	I1225 18:30:08.524483   22268 mustload.go:66] Loading cluster: addons-335994
	I1225 18:30:08.524884   22268 config.go:182] Loaded profile config "addons-335994": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1225 18:30:08.524917   22268 addons.go:622] checking whether the cluster is paused
	I1225 18:30:08.525042   22268 config.go:182] Loaded profile config "addons-335994": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1225 18:30:08.525071   22268 host.go:66] Checking if "addons-335994" exists ...
	I1225 18:30:08.525504   22268 cli_runner.go:164] Run: docker container inspect addons-335994 --format={{.State.Status}}
	I1225 18:30:08.543532   22268 ssh_runner.go:195] Run: systemctl --version
	I1225 18:30:08.543588   22268 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-335994
	I1225 18:30:08.560473   22268 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22301-5579/.minikube/machines/addons-335994/id_rsa Username:docker}
	I1225 18:30:08.649291   22268 cri.go:61] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1225 18:30:08.649379   22268 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1225 18:30:08.677148   22268 cri.go:96] found id: "17526cde63aa65c4126fa503f16bd14c465678bcb2b913d9c626d26bf26f6a9b"
	I1225 18:30:08.677170   22268 cri.go:96] found id: "ae13d82ab19208f4952cb94c64dec5d732ae1f39f8e0621404c7247137e52a9c"
	I1225 18:30:08.677174   22268 cri.go:96] found id: "a564f66aff1c230f12034368888345290a1ac191db5b257cbd32826875a8ad67"
	I1225 18:30:08.677178   22268 cri.go:96] found id: "fb9aa0d60f0c81e15923123520efc20954a633574450f74ec0ea0a3e90b314c8"
	I1225 18:30:08.677182   22268 cri.go:96] found id: "d2650d63d689ad88a17eeaf98093c607d510fcd6b22a23c2af9efd1f2932e619"
	I1225 18:30:08.677186   22268 cri.go:96] found id: "4128b130074a25d4a8df28170f6846d37ebfd7a2d07a5fc33dab746c82648915"
	I1225 18:30:08.677189   22268 cri.go:96] found id: "33817592eb0db2e6f07a567e2c6a05ce69c1ac649b019e92188ab696db18c932"
	I1225 18:30:08.677192   22268 cri.go:96] found id: "9b67245ec9b381405f30593d867f5e5cbfffaf89edb502cac7c7f5a98858b0ab"
	I1225 18:30:08.677195   22268 cri.go:96] found id: "2919cee4cae672e017d2cc057b52625b032a2c6ef08da6fbf0620796be106460"
	I1225 18:30:08.677200   22268 cri.go:96] found id: "8fbc3d212062a38e6622ed9fbc3f0889258cf6f4e7d4fb14afd72b9fe1b3111f"
	I1225 18:30:08.677204   22268 cri.go:96] found id: "a10d92f993ff9eeafa2fcb2a92dc72f8c14e2c06d1d5bdb76b1599e29961486e"
	I1225 18:30:08.677209   22268 cri.go:96] found id: "0d7c280dc245249ed1f3be62c6c9ae663ce51ccaf65687f39e4e60bd34291ccf"
	I1225 18:30:08.677213   22268 cri.go:96] found id: "e00018aa5b33d1f32fa4e4a0a1d02edaff40699b654dabb056cb2e317b7d6c59"
	I1225 18:30:08.677217   22268 cri.go:96] found id: "e8585c6c0c58b6cd9c9959116cf5a0b20dc858dc19e2352cc6ce199a37e5a7aa"
	I1225 18:30:08.677221   22268 cri.go:96] found id: "b68e3df89706be4b0915e318354d2368e0bf41b39dce1a0641435ee4df7548d2"
	I1225 18:30:08.677232   22268 cri.go:96] found id: "9e74ae6ae78b4dc3b1c93a60da879857955a5f6be8a7782273964ab44b255c66"
	I1225 18:30:08.677237   22268 cri.go:96] found id: "c5c8ab56e74b2b7a6373b1a58b03e6fb619d169b58601995d735e897d9c758ea"
	I1225 18:30:08.677247   22268 cri.go:96] found id: "eacb0925a485dcae72269b51d9663345c1f11632b5013a549a26bf8fb2fb5c80"
	I1225 18:30:08.677255   22268 cri.go:96] found id: "085d0c77def90391d2a114e99f6587e2a0c0a3760dae320144cfaab0961fa907"
	I1225 18:30:08.677260   22268 cri.go:96] found id: "e2dc79b0850b584749fd199f4bbde9ba7b322136a49f4c877b6c309de232e3bc"
	I1225 18:30:08.677270   22268 cri.go:96] found id: "80a662cb164e44deed87cc48e71e68239949d38c2c56a491690a04f800923b20"
	I1225 18:30:08.677277   22268 cri.go:96] found id: "e3cdb3152d28b90fb1def2b45c5dc8a83b7578b628a0c73854286d5ed340874b"
	I1225 18:30:08.677280   22268 cri.go:96] found id: "ccfe4d87852c0e13dcf53f3749926a9e274f59909436cd817078474a0546af7f"
	I1225 18:30:08.677283   22268 cri.go:96] found id: "ae5624121adcc542b0fa7d372b4201440bf41b8429c7c957b8f58572f05dce8b"
	I1225 18:30:08.677286   22268 cri.go:96] found id: "4beb6e0a291214adc57d2a068c0b6283ca02b7da651625ff32c7fa8173b8294a"
	I1225 18:30:08.677288   22268 cri.go:96] found id: ""
	I1225 18:30:08.677332   22268 ssh_runner.go:195] Run: sudo runc list -f json
	I1225 18:30:08.690758   22268 out.go:203] 
	W1225 18:30:08.691868   22268 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-25T18:30:08Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-25T18:30:08Z" level=error msg="open /run/runc: no such file or directory"
	
	W1225 18:30:08.691889   22268 out.go:285] * 
	* 
	W1225 18:30:08.692811   22268 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_d91df5e23a6c7812cf3b3b0d72c142ff742a541e_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_d91df5e23a6c7812cf3b3b0d72c142ff742a541e_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1225 18:30:08.694289   22268 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1057: failed to disable amd-gpu-device-plugin addon: args "out/minikube-linux-amd64 -p addons-335994 addons disable amd-gpu-device-plugin --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/AmdGpuDevicePlugin (6.23s)

                                                
                                    
TestJSONOutput/pause/Command (1.66s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 pause -p json-output-312540 --output=json --user=testUser
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-linux-amd64 pause -p json-output-312540 --output=json --user=testUser: exit status 80 (1.662054213s)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"8f2ec4ce-63c4-4a03-a039-7097635b0c89","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"Pausing node json-output-312540 ...","name":"Pausing","totalsteps":"1"}}
	{"specversion":"1.0","id":"d7438a36-0bcf-4cc7-b6b4-c6da0fa4e907","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"80","issues":"","message":"Pause: list running: runc: sudo runc list -f json: Process exited with status 1\nstdout:\n\nstderr:\ntime=\"2025-12-25T18:45:08Z\" level=error msg=\"open /run/runc: no such file or directory\"","name":"GUEST_PAUSE","url":""}}
	{"specversion":"1.0","id":"bab60152-f23b-46c8-a577-4ff2342d3c55","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"╭───────────────────────────────────────────────────────────────────────────────────────────╮\n│                                                                                           │\n│    If the above advice does not help, please let us know:                                 │\n│    https://github.com/kubernetes/minikube/issues/new/choose                               │\n│                                                                                           │\n│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │\n│    Please also attach the following f
ile to the GitHub issue:                             │\n│    - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │\n│                                                                                           │\n╰───────────────────────────────────────────────────────────────────────────────────────────╯"}}

                                                
                                                
-- /stdout --
json_output_test.go:65: failed to clean up: args "out/minikube-linux-amd64 pause -p json-output-312540 --output=json --user=testUser": exit status 80
--- FAIL: TestJSONOutput/pause/Command (1.66s)
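Note: with --output=json, every stdout line above is a single CloudEvents-style object (specversion, id, source, type, datacontenttype, data). The program below is only an illustrative reader for such lines; the struct mirrors the fields visible in this report and is not minikube's own event type.

// Illustrative decoder for minikube --output=json lines (field names assumed from
// the events shown above; this is not minikube's own event struct).
package main

import (
	"bufio"
	"encoding/json"
	"fmt"
	"os"
)

type minikubeEvent struct {
	SpecVersion string            `json:"specversion"`
	ID          string            `json:"id"`
	Source      string            `json:"source"`
	Type        string            `json:"type"`
	Data        map[string]string `json:"data"` // currentstep, message, name, exitcode, ...
}

func main() {
	sc := bufio.NewScanner(os.Stdin)
	sc.Buffer(make([]byte, 0, 1<<20), 1<<20) // some event lines are very long
	for sc.Scan() {
		var ev minikubeEvent
		if err := json.Unmarshal(sc.Bytes(), &ev); err != nil {
			continue // skip anything that is not a JSON event line
		}
		fmt.Printf("%s: %s\n", ev.Type, ev.Data["message"])
	}
}

Fed the two stdout blocks above, this would print the io.k8s.sigs.minikube.step message followed by the GUEST_PAUSE and GUEST_UNPAUSE error messages.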

                                                
                                    
TestJSONOutput/unpause/Command (2.1s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 unpause -p json-output-312540 --output=json --user=testUser
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-linux-amd64 unpause -p json-output-312540 --output=json --user=testUser: exit status 80 (2.09820966s)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"cdfaeca1-2aea-4175-8939-221a1707146d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"Unpausing node json-output-312540 ...","name":"Unpausing","totalsteps":"1"}}
	{"specversion":"1.0","id":"4fe33853-56f6-4571-8734-0a959922cb7b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"80","issues":"","message":"Pause: list paused: runc: sudo runc list -f json: Process exited with status 1\nstdout:\n\nstderr:\ntime=\"2025-12-25T18:45:11Z\" level=error msg=\"open /run/runc: no such file or directory\"","name":"GUEST_UNPAUSE","url":""}}
	{"specversion":"1.0","id":"119825dd-c9ea-4cd6-a161-e046e69ea097","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"╭───────────────────────────────────────────────────────────────────────────────────────────╮\n│                                                                                           │\n│    If the above advice does not help, please let us know:                                 │\n│    https://github.com/kubernetes/minikube/issues/new/choose                               │\n│                                                                                           │\n│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │\n│    Please also attach the following f
ile to the GitHub issue:                             │\n│    - /tmp/minikube_unpause_85c908ac827001a7ced33feb0caf7da086d17584_0.log                 │\n│                                                                                           │\n╰───────────────────────────────────────────────────────────────────────────────────────────╯"}}

                                                
                                                
-- /stdout --
json_output_test.go:65: failed to clean up: args "out/minikube-linux-amd64 unpause -p json-output-312540 --output=json --user=testUser": exit status 80
--- FAIL: TestJSONOutput/unpause/Command (2.10s)

                                                
                                    
TestPause/serial/Pause (5.71s)

                                                
                                                
=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-720311 --alsologtostderr -v=5
pause_test.go:110: (dbg) Non-zero exit: out/minikube-linux-amd64 pause -p pause-720311 --alsologtostderr -v=5: exit status 80 (2.088902304s)

                                                
                                                
-- stdout --
	* Pausing node pause-720311 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1225 18:56:21.890844  200521 out.go:360] Setting OutFile to fd 1 ...
	I1225 18:56:21.890994  200521 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1225 18:56:21.891004  200521 out.go:374] Setting ErrFile to fd 2...
	I1225 18:56:21.891011  200521 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1225 18:56:21.891199  200521 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22301-5579/.minikube/bin
	I1225 18:56:21.891466  200521 out.go:368] Setting JSON to false
	I1225 18:56:21.891489  200521 mustload.go:66] Loading cluster: pause-720311
	I1225 18:56:21.891908  200521 config.go:182] Loaded profile config "pause-720311": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1225 18:56:21.892307  200521 cli_runner.go:164] Run: docker container inspect pause-720311 --format={{.State.Status}}
	I1225 18:56:21.913368  200521 host.go:66] Checking if "pause-720311" exists ...
	I1225 18:56:21.913610  200521 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1225 18:56:21.974448  200521 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:65 OomKillDisable:false NGoroutines:76 SystemTime:2025-12-25 18:56:21.963807049 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:dea7da592f5d1d2b7755e3a161be07f43fad8f75 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1225 18:56:21.975878  200521 pause.go:60] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-
cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/22316/minikube-v1.37.0-1766570787-22316-amd64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1766570787-22316/minikube-v1.37.0-1766570787-22316-amd64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1766570787-22316-amd64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qe
mu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) preload-source:auto profile:pause-720311 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) rosetta:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true
) wantupdatenotification:%!s(bool=true) wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1225 18:56:21.981018  200521 out.go:179] * Pausing node pause-720311 ... 
	I1225 18:56:21.982218  200521 host.go:66] Checking if "pause-720311" exists ...
	I1225 18:56:21.982474  200521 ssh_runner.go:195] Run: systemctl --version
	I1225 18:56:21.982523  200521 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-720311
	I1225 18:56:22.000718  200521 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32973 SSHKeyPath:/home/jenkins/minikube-integration/22301-5579/.minikube/machines/pause-720311/id_rsa Username:docker}
	I1225 18:56:22.094785  200521 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1225 18:56:22.109349  200521 pause.go:52] kubelet running: true
	I1225 18:56:22.109396  200521 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1225 18:56:22.244586  200521 cri.go:61] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1225 18:56:22.244698  200521 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1225 18:56:22.315655  200521 cri.go:96] found id: "ee960ce66ff058a6e41f7f962f4455a46ea6bdac924d079b7a623f3753679d9b"
	I1225 18:56:22.315670  200521 cri.go:96] found id: "a2904aa2cfe95db6e5592e32f3ace1f7425fbdeeb1a508fcff3e613537c98289"
	I1225 18:56:22.315673  200521 cri.go:96] found id: "021063243ac013ff261196bce21f685dd9e9bb3617953fe9787a2f11319900cd"
	I1225 18:56:22.315676  200521 cri.go:96] found id: "7db894da7184856a5ed52b87a0cf50749a3b9f33a3b9a1ec71162758bee876eb"
	I1225 18:56:22.315679  200521 cri.go:96] found id: "0ff8d5bb771c5b8d4911f9ce99fbf23ccb7accf7fdf9297efd5cae3ca935e25b"
	I1225 18:56:22.315683  200521 cri.go:96] found id: "29a34cf78521a9fdae8bedcf111fe2a71ca43ade40baddac3eb8e95bb8f0d6f7"
	I1225 18:56:22.315687  200521 cri.go:96] found id: "4f901c5416c8ce2d27a266ec4f89feaf3fdc93bea79b0b406be3c90d65d83503"
	I1225 18:56:22.315691  200521 cri.go:96] found id: ""
	I1225 18:56:22.315740  200521 ssh_runner.go:195] Run: sudo runc list -f json
	I1225 18:56:22.328055  200521 retry.go:84] will retry after 200ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-25T18:56:22Z" level=error msg="open /run/runc: no such file or directory"
	I1225 18:56:22.496420  200521 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1225 18:56:22.509711  200521 pause.go:52] kubelet running: false
	I1225 18:56:22.509770  200521 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1225 18:56:22.628520  200521 cri.go:61] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1225 18:56:22.628601  200521 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1225 18:56:22.700740  200521 cri.go:96] found id: "ee960ce66ff058a6e41f7f962f4455a46ea6bdac924d079b7a623f3753679d9b"
	I1225 18:56:22.700761  200521 cri.go:96] found id: "a2904aa2cfe95db6e5592e32f3ace1f7425fbdeeb1a508fcff3e613537c98289"
	I1225 18:56:22.700766  200521 cri.go:96] found id: "021063243ac013ff261196bce21f685dd9e9bb3617953fe9787a2f11319900cd"
	I1225 18:56:22.700771  200521 cri.go:96] found id: "7db894da7184856a5ed52b87a0cf50749a3b9f33a3b9a1ec71162758bee876eb"
	I1225 18:56:22.700773  200521 cri.go:96] found id: "0ff8d5bb771c5b8d4911f9ce99fbf23ccb7accf7fdf9297efd5cae3ca935e25b"
	I1225 18:56:22.700776  200521 cri.go:96] found id: "29a34cf78521a9fdae8bedcf111fe2a71ca43ade40baddac3eb8e95bb8f0d6f7"
	I1225 18:56:22.700779  200521 cri.go:96] found id: "4f901c5416c8ce2d27a266ec4f89feaf3fdc93bea79b0b406be3c90d65d83503"
	I1225 18:56:22.700782  200521 cri.go:96] found id: ""
	I1225 18:56:22.700813  200521 ssh_runner.go:195] Run: sudo runc list -f json
	I1225 18:56:23.209699  200521 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1225 18:56:23.224256  200521 pause.go:52] kubelet running: false
	I1225 18:56:23.224434  200521 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1225 18:56:23.340876  200521 cri.go:61] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1225 18:56:23.340970  200521 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1225 18:56:23.410950  200521 cri.go:96] found id: "ee960ce66ff058a6e41f7f962f4455a46ea6bdac924d079b7a623f3753679d9b"
	I1225 18:56:23.410976  200521 cri.go:96] found id: "a2904aa2cfe95db6e5592e32f3ace1f7425fbdeeb1a508fcff3e613537c98289"
	I1225 18:56:23.410982  200521 cri.go:96] found id: "021063243ac013ff261196bce21f685dd9e9bb3617953fe9787a2f11319900cd"
	I1225 18:56:23.410986  200521 cri.go:96] found id: "7db894da7184856a5ed52b87a0cf50749a3b9f33a3b9a1ec71162758bee876eb"
	I1225 18:56:23.410990  200521 cri.go:96] found id: "0ff8d5bb771c5b8d4911f9ce99fbf23ccb7accf7fdf9297efd5cae3ca935e25b"
	I1225 18:56:23.410995  200521 cri.go:96] found id: "29a34cf78521a9fdae8bedcf111fe2a71ca43ade40baddac3eb8e95bb8f0d6f7"
	I1225 18:56:23.411018  200521 cri.go:96] found id: "4f901c5416c8ce2d27a266ec4f89feaf3fdc93bea79b0b406be3c90d65d83503"
	I1225 18:56:23.411028  200521 cri.go:96] found id: ""
	I1225 18:56:23.411070  200521 ssh_runner.go:195] Run: sudo runc list -f json
	I1225 18:56:23.708305  200521 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1225 18:56:23.722109  200521 pause.go:52] kubelet running: false
	I1225 18:56:23.722156  200521 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1225 18:56:23.830987  200521 cri.go:61] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1225 18:56:23.831060  200521 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1225 18:56:23.897110  200521 cri.go:96] found id: "ee960ce66ff058a6e41f7f962f4455a46ea6bdac924d079b7a623f3753679d9b"
	I1225 18:56:23.897130  200521 cri.go:96] found id: "a2904aa2cfe95db6e5592e32f3ace1f7425fbdeeb1a508fcff3e613537c98289"
	I1225 18:56:23.897135  200521 cri.go:96] found id: "021063243ac013ff261196bce21f685dd9e9bb3617953fe9787a2f11319900cd"
	I1225 18:56:23.897140  200521 cri.go:96] found id: "7db894da7184856a5ed52b87a0cf50749a3b9f33a3b9a1ec71162758bee876eb"
	I1225 18:56:23.897144  200521 cri.go:96] found id: "0ff8d5bb771c5b8d4911f9ce99fbf23ccb7accf7fdf9297efd5cae3ca935e25b"
	I1225 18:56:23.897149  200521 cri.go:96] found id: "29a34cf78521a9fdae8bedcf111fe2a71ca43ade40baddac3eb8e95bb8f0d6f7"
	I1225 18:56:23.897153  200521 cri.go:96] found id: "4f901c5416c8ce2d27a266ec4f89feaf3fdc93bea79b0b406be3c90d65d83503"
	I1225 18:56:23.897157  200521 cri.go:96] found id: ""
	I1225 18:56:23.897206  200521 ssh_runner.go:195] Run: sudo runc list -f json
	I1225 18:56:23.910268  200521 out.go:203] 
	W1225 18:56:23.911732  200521 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-25T18:56:23Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-25T18:56:23Z" level=error msg="open /run/runc: no such file or directory"
	
	W1225 18:56:23.911749  200521 out.go:285] * 
	* 
	W1225 18:56:23.913928  200521 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1225 18:56:23.915852  200521 out.go:203] 

                                                
                                                
** /stderr **
pause_test.go:112: failed to pause minikube with args: "out/minikube-linux-amd64 pause -p pause-720311 --alsologtostderr -v=5" : exit status 80
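Note: the pause path logged above repeats the same sequence several times before giving up: check whether kubelet is active, run `sudo systemctl disable --now kubelet`, list running containers per namespace with crictl, then call `sudo runc list -f json`; retry.go reports "will retry after 200ms" and the loop re-runs until the command finally exits 80 with GUEST_PAUSE. The snippet below sketches that fixed-interval retry shape only; the attempt count and interval are illustrative values, not minikube's.

// Sketch of a fixed-interval retry loop like the one visible above (retry.go logs
// "will retry after 200ms"). Attempts and interval here are illustrative only.
package main

import (
	"errors"
	"fmt"
	"time"
)

// retryFixed retries fn up to attempts times, sleeping interval between failures.
func retryFixed(attempts int, interval time.Duration, fn func() error) error {
	var err error
	for i := 0; i < attempts; i++ {
		if err = fn(); err == nil {
			return nil
		}
		time.Sleep(interval)
	}
	return fmt.Errorf("after %d attempts: %w", attempts, err)
}

func main() {
	err := retryFixed(4, 200*time.Millisecond, func() error {
		// Stand-in for the failing `sudo runc list -f json` call in the log.
		return errors.New("open /run/runc: no such file or directory")
	})
	fmt.Println(err) // the pause command surfaces this as GUEST_PAUSE, exit status 80
}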
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestPause/serial/Pause]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestPause/serial/Pause]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect pause-720311
helpers_test.go:244: (dbg) docker inspect pause-720311:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "e4864a536fdb5b8aceb5be52bd2004d426714e42760f81e5768ee6b18221ffae",
	        "Created": "2025-12-25T18:55:40.545827098Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 187097,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-25T18:55:40.602604014Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:c444b5e11df9d6bc496256b0e11f4e11deb33a885211ded2c3a4667df55bcb6b",
	        "ResolvConfPath": "/var/lib/docker/containers/e4864a536fdb5b8aceb5be52bd2004d426714e42760f81e5768ee6b18221ffae/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/e4864a536fdb5b8aceb5be52bd2004d426714e42760f81e5768ee6b18221ffae/hostname",
	        "HostsPath": "/var/lib/docker/containers/e4864a536fdb5b8aceb5be52bd2004d426714e42760f81e5768ee6b18221ffae/hosts",
	        "LogPath": "/var/lib/docker/containers/e4864a536fdb5b8aceb5be52bd2004d426714e42760f81e5768ee6b18221ffae/e4864a536fdb5b8aceb5be52bd2004d426714e42760f81e5768ee6b18221ffae-json.log",
	        "Name": "/pause-720311",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "pause-720311:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "pause-720311",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "e4864a536fdb5b8aceb5be52bd2004d426714e42760f81e5768ee6b18221ffae",
	                "LowerDir": "/var/lib/docker/overlay2/8e1c4f0ab76d02e8450e6b83ec3758f662b4df08ad6b26f9253afb0c89fbcf50-init/diff:/var/lib/docker/overlay2/8152586e7e91edad0090b5c322534edd1346ae6dc28cbca1827aa4c23f366758/diff",
	                "MergedDir": "/var/lib/docker/overlay2/8e1c4f0ab76d02e8450e6b83ec3758f662b4df08ad6b26f9253afb0c89fbcf50/merged",
	                "UpperDir": "/var/lib/docker/overlay2/8e1c4f0ab76d02e8450e6b83ec3758f662b4df08ad6b26f9253afb0c89fbcf50/diff",
	                "WorkDir": "/var/lib/docker/overlay2/8e1c4f0ab76d02e8450e6b83ec3758f662b4df08ad6b26f9253afb0c89fbcf50/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "pause-720311",
	                "Source": "/var/lib/docker/volumes/pause-720311/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "pause-720311",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "pause-720311",
	                "name.minikube.sigs.k8s.io": "pause-720311",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "3012d7fab79dccb9bcb575678d5cf615799851ef87e298ac5d39531452fc7c50",
	            "SandboxKey": "/var/run/docker/netns/3012d7fab79d",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32973"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32974"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32977"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32975"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32976"
	                    }
	                ]
	            },
	            "Networks": {
	                "pause-720311": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "0f23bcac1d2c3903d8aa20c6bf8e5adb4ee52801ca08ef1f3738609edea52e42",
	                    "EndpointID": "b1fb4254d06e43a766c79910d7428c2aa9721c58630b5e07a81110d2cb1d0fa9",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "MacAddress": "c6:1b:73:4b:23:69",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "pause-720311",
	                        "e4864a536fdb"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
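Note: the SSH endpoint used earlier in this test (127.0.0.1:32973) comes from the 22/tcp entry under NetworkSettings.Ports in this inspect output; the stderr log above resolves it with the template {{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}. The program below is only an illustrative decoder for that same lookup, with a struct that mirrors just the fields used here; it could be run as, say, docker inspect pause-720311 | go run hostport.go (the filename is made up).

// Illustrative lookup of the 22/tcp host port from `docker inspect` JSON, mirroring
// the Go template used in the log above. Only the fields needed here are modeled.
package main

import (
	"encoding/json"
	"fmt"
	"os"
)

type inspectEntry struct {
	Name            string `json:"Name"`
	NetworkSettings struct {
		Ports map[string][]struct {
			HostIP   string `json:"HostIp"`
			HostPort string `json:"HostPort"`
		} `json:"Ports"`
	} `json:"NetworkSettings"`
}

func main() {
	var entries []inspectEntry // `docker inspect` emits a JSON array
	if err := json.NewDecoder(os.Stdin).Decode(&entries); err != nil {
		fmt.Fprintln(os.Stderr, "decode inspect output:", err)
		os.Exit(1)
	}
	if len(entries) == 0 || len(entries[0].NetworkSettings.Ports["22/tcp"]) == 0 {
		fmt.Fprintln(os.Stderr, "no 22/tcp binding found")
		os.Exit(1)
	}
	// For the pause-720311 container above this prints 32973.
	fmt.Println(entries[0].NetworkSettings.Ports["22/tcp"][0].HostPort)
}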
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p pause-720311 -n pause-720311
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p pause-720311 -n pause-720311: exit status 2 (349.109056ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
helpers_test.go:253: <<< TestPause/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestPause/serial/Pause]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-amd64 -p pause-720311 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-amd64 -p pause-720311 logs -n 25: (1.016287834s)
helpers_test.go:261: TestPause/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────┬────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                 ARGS                                                  │    PROFILE     │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────┼────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ pause   │ -p pause-720311 --alsologtostderr -v=5                                                                │ pause-720311   │ jenkins │ v1.37.0 │ 25 Dec 25 18:56 UTC │                     │
	│ ssh     │ -p kubenet-910464 sudo systemctl cat docker --no-pager                                                │ kubenet-910464 │ jenkins │ v1.37.0 │ 25 Dec 25 18:56 UTC │                     │
	│ ssh     │ -p kubenet-910464 sudo cat /etc/docker/daemon.json                                                    │ kubenet-910464 │ jenkins │ v1.37.0 │ 25 Dec 25 18:56 UTC │                     │
	│ ssh     │ -p kubenet-910464 sudo docker system info                                                             │ kubenet-910464 │ jenkins │ v1.37.0 │ 25 Dec 25 18:56 UTC │                     │
	│ ssh     │ -p kubenet-910464 sudo systemctl status cri-docker --all --full --no-pager                            │ kubenet-910464 │ jenkins │ v1.37.0 │ 25 Dec 25 18:56 UTC │                     │
	│ ssh     │ -p kubenet-910464 sudo systemctl cat cri-docker --no-pager                                            │ kubenet-910464 │ jenkins │ v1.37.0 │ 25 Dec 25 18:56 UTC │                     │
	│ ssh     │ -p kubenet-910464 sudo cat /etc/systemd/system/cri-docker.service.d/10-cni.conf                       │ kubenet-910464 │ jenkins │ v1.37.0 │ 25 Dec 25 18:56 UTC │                     │
	│ ssh     │ -p kubenet-910464 sudo cat /usr/lib/systemd/system/cri-docker.service                                 │ kubenet-910464 │ jenkins │ v1.37.0 │ 25 Dec 25 18:56 UTC │                     │
	│ ssh     │ -p kubenet-910464 sudo cri-dockerd --version                                                          │ kubenet-910464 │ jenkins │ v1.37.0 │ 25 Dec 25 18:56 UTC │                     │
	│ ssh     │ -p kubenet-910464 sudo systemctl status containerd --all --full --no-pager                            │ kubenet-910464 │ jenkins │ v1.37.0 │ 25 Dec 25 18:56 UTC │                     │
	│ ssh     │ -p kubenet-910464 sudo systemctl cat containerd --no-pager                                            │ kubenet-910464 │ jenkins │ v1.37.0 │ 25 Dec 25 18:56 UTC │                     │
	│ ssh     │ -p kubenet-910464 sudo cat /lib/systemd/system/containerd.service                                     │ kubenet-910464 │ jenkins │ v1.37.0 │ 25 Dec 25 18:56 UTC │                     │
	│ ssh     │ -p kubenet-910464 sudo cat /etc/containerd/config.toml                                                │ kubenet-910464 │ jenkins │ v1.37.0 │ 25 Dec 25 18:56 UTC │                     │
	│ ssh     │ -p kubenet-910464 sudo containerd config dump                                                         │ kubenet-910464 │ jenkins │ v1.37.0 │ 25 Dec 25 18:56 UTC │                     │
	│ ssh     │ -p kubenet-910464 sudo systemctl status crio --all --full --no-pager                                  │ kubenet-910464 │ jenkins │ v1.37.0 │ 25 Dec 25 18:56 UTC │                     │
	│ ssh     │ -p kubenet-910464 sudo systemctl cat crio --no-pager                                                  │ kubenet-910464 │ jenkins │ v1.37.0 │ 25 Dec 25 18:56 UTC │                     │
	│ ssh     │ -p kubenet-910464 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                        │ kubenet-910464 │ jenkins │ v1.37.0 │ 25 Dec 25 18:56 UTC │                     │
	│ ssh     │ -p kubenet-910464 sudo crio config                                                                    │ kubenet-910464 │ jenkins │ v1.37.0 │ 25 Dec 25 18:56 UTC │                     │
	│ delete  │ -p kubenet-910464                                                                                     │ kubenet-910464 │ jenkins │ v1.37.0 │ 25 Dec 25 18:56 UTC │ 25 Dec 25 18:56 UTC │
	│ start   │ -p false-910464 --memory=3072 --alsologtostderr --cni=false --driver=docker  --container-runtime=crio │ false-910464   │ jenkins │ v1.37.0 │ 25 Dec 25 18:56 UTC │                     │
	│ ssh     │ -p false-910464 sudo cat /etc/nsswitch.conf                                                           │ false-910464   │ jenkins │ v1.37.0 │ 25 Dec 25 18:56 UTC │                     │
	│ ssh     │ -p false-910464 sudo cat /etc/hosts                                                                   │ false-910464   │ jenkins │ v1.37.0 │ 25 Dec 25 18:56 UTC │                     │
	│ ssh     │ -p false-910464 sudo cat /etc/resolv.conf                                                             │ false-910464   │ jenkins │ v1.37.0 │ 25 Dec 25 18:56 UTC │                     │
	│ ssh     │ -p false-910464 sudo crictl pods                                                                      │ false-910464   │ jenkins │ v1.37.0 │ 25 Dec 25 18:56 UTC │                     │
	│ ssh     │ -p false-910464 sudo crictl ps --all                                                                  │ false-910464   │ jenkins │ v1.37.0 │ 25 Dec 25 18:56 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────┴────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/25 18:56:23
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1225 18:56:23.165141  200995 out.go:360] Setting OutFile to fd 1 ...
	I1225 18:56:23.165237  200995 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1225 18:56:23.165242  200995 out.go:374] Setting ErrFile to fd 2...
	I1225 18:56:23.165246  200995 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1225 18:56:23.165421  200995 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22301-5579/.minikube/bin
	I1225 18:56:23.165876  200995 out.go:368] Setting JSON to false
	I1225 18:56:23.167137  200995 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":2331,"bootTime":1766686652,"procs":297,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1225 18:56:23.167191  200995 start.go:143] virtualization: kvm guest
	I1225 18:56:23.169015  200995 out.go:179] * [false-910464] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1225 18:56:23.170107  200995 out.go:179]   - MINIKUBE_LOCATION=22301
	I1225 18:56:23.170111  200995 notify.go:221] Checking for updates...
	I1225 18:56:23.172138  200995 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1225 18:56:23.173213  200995 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22301-5579/kubeconfig
	I1225 18:56:23.177121  200995 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22301-5579/.minikube
	I1225 18:56:23.178308  200995 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1225 18:56:23.179464  200995 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1225 18:56:23.181090  200995 config.go:182] Loaded profile config "NoKubernetes-904366": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v0.0.0
	I1225 18:56:23.181290  200995 config.go:182] Loaded profile config "pause-720311": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1225 18:56:23.181405  200995 config.go:182] Loaded profile config "stopped-upgrade-746190": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.0
	I1225 18:56:23.181510  200995 driver.go:422] Setting default libvirt URI to qemu:///system
	I1225 18:56:23.205227  200995 docker.go:124] docker version: linux-29.1.3:Docker Engine - Community
	I1225 18:56:23.205354  200995 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1225 18:56:23.267258  200995 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:65 OomKillDisable:false NGoroutines:76 SystemTime:2025-12-25 18:56:23.252824051 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:dea7da592f5d1d2b7755e3a161be07f43fad8f75 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1225 18:56:23.267411  200995 docker.go:319] overlay module found
	I1225 18:56:23.269413  200995 out.go:179] * Using the docker driver based on user configuration
	I1225 18:56:23.270469  200995 start.go:309] selected driver: docker
	I1225 18:56:23.270483  200995 start.go:928] validating driver "docker" against <nil>
	I1225 18:56:23.270494  200995 start.go:939] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1225 18:56:23.275372  200995 out.go:203] 
	W1225 18:56:23.276463  200995 out.go:285] X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	I1225 18:56:23.277491  200995 out.go:203] 
	I1225 18:56:23.232973  196945 api_server.go:315] stopped: https://192.168.103.2:8443/healthz: Get "https://192.168.103.2:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1225 18:56:23.233013  196945 api_server.go:299] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1225 18:56:24.206499  195806 ssh_runner.go:235] Completed: sudo /usr/local/bin/crictl stop --timeout=10 553bf7552317ad9f9bf527673f84c4ca6e6610645e903ba07325ceb8d467b820 bfcb4eba10bfaf5fc9839b8940bf7dd8ca5a05a999d54b8b1ea10200ce1501ae d99c5329b327c2e61f29123aff910b4d838e44791908efe5eb56490161992b68 f4fd56404d0862a9f7f2fdb8fe3c8f74f13bc83b7f52d9a04e866d4211a0cd02: (17.871993362s)
	I1225 18:56:24.206605  195806 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1225 18:56:24.225033  195806 out.go:179]   - Kubernetes: Stopped
	
	
	==> CRI-O <==
	Dec 25 18:56:18 pause-720311 crio[2201]: time="2025-12-25T18:56:18.751870871Z" level=info msg="RDT not available in the host system"
	Dec 25 18:56:18 pause-720311 crio[2201]: time="2025-12-25T18:56:18.751884148Z" level=info msg="Using conmon executable: /usr/libexec/crio/conmon"
	Dec 25 18:56:18 pause-720311 crio[2201]: time="2025-12-25T18:56:18.752633437Z" level=info msg="Conmon does support the --sync option"
	Dec 25 18:56:18 pause-720311 crio[2201]: time="2025-12-25T18:56:18.752648777Z" level=info msg="Conmon does support the --log-global-size-max option"
	Dec 25 18:56:18 pause-720311 crio[2201]: time="2025-12-25T18:56:18.752660517Z" level=info msg="Using conmon executable: /usr/libexec/crio/conmon"
	Dec 25 18:56:18 pause-720311 crio[2201]: time="2025-12-25T18:56:18.75333418Z" level=info msg="Conmon does support the --sync option"
	Dec 25 18:56:18 pause-720311 crio[2201]: time="2025-12-25T18:56:18.75334737Z" level=info msg="Conmon does support the --log-global-size-max option"
	Dec 25 18:56:18 pause-720311 crio[2201]: time="2025-12-25T18:56:18.75704568Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Dec 25 18:56:18 pause-720311 crio[2201]: time="2025-12-25T18:56:18.757064689Z" level=info msg="Updated default CNI network name to kindnet"
	Dec 25 18:56:18 pause-720311 crio[2201]: time="2025-12-25T18:56:18.757495251Z" level=info msg="Current CRI-O configuration:\n[crio]\n  root = \"/var/lib/containers/storage\"\n  runroot = \"/run/containers/storage\"\n  imagestore = \"\"\n  storage_driver = \"overlay\"\n  log_dir = \"/var/log/crio/pods\"\n  version_file = \"/var/run/crio/version\"\n  version_file_persist = \"\"\n  clean_shutdown_file = \"/var/lib/crio/clean.shutdown\"\n  internal_wipe = true\n  internal_repair = true\n  [crio.api]\n    grpc_max_send_msg_size = 83886080\n    grpc_max_recv_msg_size = 83886080\n    listen = \"/var/run/crio/crio.sock\"\n    stream_address = \"127.0.0.1\"\n    stream_port = \"0\"\n    stream_enable_tls = false\n    stream_tls_cert = \"\"\n    stream_tls_key = \"\"\n    stream_tls_ca = \"\"\n    stream_idle_timeout = \"\"\n  [crio.runtime]\n    no_pivot = false\n    selinux = false\n    log_to_journald = false\n    drop_infra_ctr = true\n    read_only = false\n    hooks_dir = [\"/usr/share/containers/oci/hoo
ks.d\"]\n    default_capabilities = [\"CHOWN\", \"DAC_OVERRIDE\", \"FSETID\", \"FOWNER\", \"SETGID\", \"SETUID\", \"SETPCAP\", \"NET_BIND_SERVICE\", \"KILL\"]\n    add_inheritable_capabilities = false\n    default_sysctls = [\"net.ipv4.ip_unprivileged_port_start=0\"]\n    allowed_devices = [\"/dev/fuse\", \"/dev/net/tun\"]\n    cdi_spec_dirs = [\"/etc/cdi\", \"/var/run/cdi\"]\n    device_ownership_from_security_context = false\n    default_runtime = \"crun\"\n    decryption_keys_path = \"/etc/crio/keys/\"\n    conmon = \"\"\n    conmon_cgroup = \"pod\"\n    seccomp_profile = \"\"\n    privileged_seccomp_profile = \"\"\n    apparmor_profile = \"crio-default\"\n    blockio_config_file = \"\"\n    blockio_reload = false\n    irqbalance_config_file = \"/etc/sysconfig/irqbalance\"\n    rdt_config_file = \"\"\n    cgroup_manager = \"systemd\"\n    default_mounts_file = \"\"\n    container_exits_dir = \"/var/run/crio/exits\"\n    container_attach_socket_dir = \"/var/run/crio\"\n    bind_mount_prefix = \"\"\n    uid_
mappings = \"\"\n    minimum_mappable_uid = -1\n    gid_mappings = \"\"\n    minimum_mappable_gid = -1\n    log_level = \"info\"\n    log_filter = \"\"\n    namespaces_dir = \"/var/run\"\n    pinns_path = \"/usr/bin/pinns\"\n    enable_criu_support = false\n    pids_limit = -1\n    log_size_max = -1\n    ctr_stop_timeout = 30\n    separate_pull_cgroup = \"\"\n    infra_ctr_cpuset = \"\"\n    shared_cpuset = \"\"\n    enable_pod_events = false\n    irqbalance_config_restore_file = \"/etc/sysconfig/orig_irq_banned_cpus\"\n    hostnetwork_disable_selinux = true\n    disable_hostport_mapping = false\n    timezone = \"\"\n    [crio.runtime.runtimes]\n      [crio.runtime.runtimes.crun]\n        runtime_config_path = \"\"\n        runtime_path = \"/usr/libexec/crio/crun\"\n        runtime_type = \"\"\n        runtime_root = \"/run/crun\"\n        allowed_annotations = [\"io.containers.trace-syscall\"]\n        monitor_path = \"/usr/libexec/crio/conmon\"\n        monitor_cgroup = \"pod\"\n        container_min_memory
= \"12MiB\"\n        no_sync_log = false\n      [crio.runtime.runtimes.runc]\n        runtime_config_path = \"\"\n        runtime_path = \"/usr/libexec/crio/runc\"\n        runtime_type = \"\"\n        runtime_root = \"/run/runc\"\n        monitor_path = \"/usr/libexec/crio/conmon\"\n        monitor_cgroup = \"pod\"\n        container_min_memory = \"12MiB\"\n        no_sync_log = false\n  [crio.image]\n    default_transport = \"docker://\"\n    global_auth_file = \"\"\n    namespaced_auth_dir = \"/etc/crio/auth\"\n    pause_image = \"registry.k8s.io/pause:3.10.1\"\n    pause_image_auth_file = \"\"\n    pause_command = \"/pause\"\n    signature_policy = \"/etc/crio/policy.json\"\n    signature_policy_dir = \"/etc/crio/policies\"\n    image_volumes = \"mkdir\"\n    big_files_temporary_dir = \"\"\n    auto_reload_registries = false\n    pull_progress_timeout = \"0s\"\n    oci_artifact_mount_support = true\n    short_name_mode = \"enforcing\"\n  [crio.network]\n    cni_default_network = \"\"\n    network_dir = \
"/etc/cni/net.d/\"\n    plugin_dirs = [\"/opt/cni/bin/\"]\n  [crio.metrics]\n    enable_metrics = false\n    metrics_collectors = [\"image_pulls_layer_size\", \"containers_events_dropped_total\", \"containers_oom_total\", \"processes_defunct\", \"operations_total\", \"operations_latency_seconds\", \"operations_latency_seconds_total\", \"operations_errors_total\", \"image_pulls_bytes_total\", \"image_pulls_skipped_bytes_total\", \"image_pulls_failure_total\", \"image_pulls_success_total\", \"image_layer_reuse_total\", \"containers_oom_count_total\", \"containers_seccomp_notifier_count_total\", \"resources_stalled_at_stage\", \"containers_stopped_monitor_count\"]\n    metrics_host = \"127.0.0.1\"\n    metrics_port = 9090\n    metrics_socket = \"\"\n    metrics_cert = \"\"\n    metrics_key = \"\"\n  [crio.tracing]\n    enable_tracing = false\n    tracing_endpoint = \"127.0.0.1:4317\"\n    tracing_sampling_rate_per_million = 0\n  [crio.stats]\n    stats_collection_period = 0\n    collection_period = 0\n  [crio.nr
i]\n    enable_nri = true\n    nri_listen = \"/var/run/nri/nri.sock\"\n    nri_plugin_dir = \"/opt/nri/plugins\"\n    nri_plugin_config_dir = \"/etc/nri/conf.d\"\n    nri_plugin_registration_timeout = \"5s\"\n    nri_plugin_request_timeout = \"2s\"\n    nri_disable_connections = false\n    [crio.nri.default_validator]\n      nri_enable_default_validator = false\n      nri_validator_reject_oci_hook_adjustment = false\n      nri_validator_reject_runtime_default_seccomp_adjustment = false\n      nri_validator_reject_unconfined_seccomp_adjustment = false\n      nri_validator_reject_custom_seccomp_adjustment = false\n      nri_validator_reject_namespace_adjustment = false\n      nri_validator_tolerate_missing_plugins_annotation = \"\"\n"
	Dec 25 18:56:18 pause-720311 crio[2201]: time="2025-12-25T18:56:18.757826587Z" level=info msg="Attempting to restore irqbalance config from /etc/sysconfig/orig_irq_banned_cpus"
	Dec 25 18:56:18 pause-720311 crio[2201]: time="2025-12-25T18:56:18.757872034Z" level=info msg="Restore irqbalance config: failed to get current CPU ban list, ignoring"
	Dec 25 18:56:18 pause-720311 crio[2201]: time="2025-12-25T18:56:18.834676205Z" level=info msg="Got pod network &{Name:coredns-66bc5c9577-mcpjn Namespace:kube-system ID:136888f8b9239f432bb9f1c978a0bac5ba8cf5a3b21b494d99cb7f35749cda4c UID:3d326b5f-ad06-4352-8d63-5a95a4791894 NetNS:/var/run/netns/b9b8d9f7-4e81-4e60-a674-05afe6fb82ea Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc000128700}] Aliases:map[]}"
	Dec 25 18:56:18 pause-720311 crio[2201]: time="2025-12-25T18:56:18.834877716Z" level=info msg="Checking pod kube-system_coredns-66bc5c9577-mcpjn for CNI network kindnet (type=ptp)"
	Dec 25 18:56:18 pause-720311 crio[2201]: time="2025-12-25T18:56:18.835317989Z" level=info msg="Registered SIGHUP reload watcher"
	Dec 25 18:56:18 pause-720311 crio[2201]: time="2025-12-25T18:56:18.835339325Z" level=info msg="Starting seccomp notifier watcher"
	Dec 25 18:56:18 pause-720311 crio[2201]: time="2025-12-25T18:56:18.835378346Z" level=info msg="Create NRI interface"
	Dec 25 18:56:18 pause-720311 crio[2201]: time="2025-12-25T18:56:18.835481933Z" level=info msg="built-in NRI default validator is disabled"
	Dec 25 18:56:18 pause-720311 crio[2201]: time="2025-12-25T18:56:18.835491573Z" level=info msg="runtime interface created"
	Dec 25 18:56:18 pause-720311 crio[2201]: time="2025-12-25T18:56:18.835508488Z" level=info msg="Registered domain \"k8s.io\" with NRI"
	Dec 25 18:56:18 pause-720311 crio[2201]: time="2025-12-25T18:56:18.835518479Z" level=info msg="runtime interface starting up..."
	Dec 25 18:56:18 pause-720311 crio[2201]: time="2025-12-25T18:56:18.835525766Z" level=info msg="starting plugins..."
	Dec 25 18:56:18 pause-720311 crio[2201]: time="2025-12-25T18:56:18.835538906Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
	Dec 25 18:56:18 pause-720311 crio[2201]: time="2025-12-25T18:56:18.835864485Z" level=info msg="No systemd watchdog enabled"
	Dec 25 18:56:18 pause-720311 systemd[1]: Started crio.service - Container Runtime Interface for OCI (CRI-O).
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                    NAMESPACE
	ee960ce66ff05       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                     11 seconds ago      Running             coredns                   0                   136888f8b9239       coredns-66bc5c9577-mcpjn               kube-system
	a2904aa2cfe95       docker.io/kindest/kindnetd@sha256:7c22558dc06a570d46ea6e8a73b23cdc754eb81f7c08d3441a3171ad359ffc27   22 seconds ago      Running             kindnet-cni               0                   88418e2116e55       kindnet-s9r7k                          kube-system
	021063243ac01       36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691                                     24 seconds ago      Running             kube-proxy                0                   22f5f50fc1c7e       kube-proxy-2r7sc                       kube-system
	7db894da71848       5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942                                     34 seconds ago      Running             kube-controller-manager   0                   cc1fd71e38a9d       kube-controller-manager-pause-720311   kube-system
	0ff8d5bb771c5       aa27095f5619377172f3d59289ccb2ba567ebea93a736d1705be068b2c030b0c                                     34 seconds ago      Running             kube-apiserver            0                   0b99720523e7f       kube-apiserver-pause-720311            kube-system
	29a34cf78521a       a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1                                     34 seconds ago      Running             etcd                      0                   17157164a4c0a       etcd-pause-720311                      kube-system
	4f901c5416c8c       aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78                                     34 seconds ago      Running             kube-scheduler            0                   0f39af99e629b       kube-scheduler-pause-720311            kube-system
	
	
	==> coredns [ee960ce66ff058a6e41f7f962f4455a46ea6bdac924d079b7a623f3753679d9b] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 3e2243e8b9e7116f563b83b1933f477a68ba9ad4a829ed5d7e54629fb2ce53528b9bc6023030be20be434ad805fd246296dd428c64e9bbef3a70f22b8621f560
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:55298 - 6574 "HINFO IN 4119510598299851531.9051950715605258099. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.418538587s
	
	
	==> describe nodes <==
	Name:               pause-720311
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=pause-720311
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=65b0339f3ab6fa9cf527eb915d9288ef7a9c7fef
	                    minikube.k8s.io/name=pause-720311
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_25T18_55_55_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 25 Dec 2025 18:55:52 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  pause-720311
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 25 Dec 2025 18:56:15 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 25 Dec 2025 18:56:13 +0000   Thu, 25 Dec 2025 18:55:50 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 25 Dec 2025 18:56:13 +0000   Thu, 25 Dec 2025 18:55:50 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 25 Dec 2025 18:56:13 +0000   Thu, 25 Dec 2025 18:55:50 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 25 Dec 2025 18:56:13 +0000   Thu, 25 Dec 2025 18:56:13 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    pause-720311
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863360Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863360Ki
	  pods:               110
	System Info:
	  Machine ID:                 6a05ebbd9dbde47388fc365d694bbbfd
	  System UUID:                5cd5dddd-5a9c-43ef-b383-6d89754632b0
	  Boot ID:                    665c5054-bd76-444c-ba4d-23c4edde1464
	  Kernel Version:             6.8.0-1045-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.3
	  Kubelet Version:            v1.34.3
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-66bc5c9577-mcpjn                100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     25s
	  kube-system                 etcd-pause-720311                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         30s
	  kube-system                 kindnet-s9r7k                           100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      25s
	  kube-system                 kube-apiserver-pause-720311             250m (3%)     0 (0%)      0 (0%)           0 (0%)         30s
	  kube-system                 kube-controller-manager-pause-720311    200m (2%)     0 (0%)      0 (0%)           0 (0%)         31s
	  kube-system                 kube-proxy-2r7sc                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         25s
	  kube-system                 kube-scheduler-pause-720311             100m (1%)     0 (0%)      0 (0%)           0 (0%)         31s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 23s   kube-proxy       
	  Normal  Starting                 31s   kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  30s   kubelet          Node pause-720311 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    30s   kubelet          Node pause-720311 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     30s   kubelet          Node pause-720311 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           26s   node-controller  Node pause-720311 event: Registered Node pause-720311 in Controller
	  Normal  NodeReady                12s   kubelet          Node pause-720311 status is now: NodeReady
	
	
	==> dmesg <==
	[Dec25 18:17] MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
	[  +0.001703] TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details.
	[  +0.001000] MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
	[  +0.084012] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
	[  +0.391152] i8042: Warning: Keylock active
	[  +0.010665] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.485479] block sda: the capability attribute has been deprecated.
	[  +0.079658] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.024208] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +4.790329] kauditd_printk_skb: 47 callbacks suppressed
	
	
	==> etcd [29a34cf78521a9fdae8bedcf111fe2a71ca43ade40baddac3eb8e95bb8f0d6f7] <==
	{"level":"warn","ts":"2025-12-25T18:55:51.399461Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39476","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-25T18:55:51.408651Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39486","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-25T18:55:51.420611Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39512","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-25T18:55:51.429599Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39528","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-25T18:55:51.443355Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39548","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-25T18:55:51.450499Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39574","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-25T18:55:51.460482Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39594","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-25T18:55:51.470605Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39612","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-25T18:55:51.479068Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39648","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-25T18:55:51.489217Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39652","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-25T18:55:51.499523Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39660","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-25T18:55:51.511124Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39692","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-25T18:55:51.522342Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39704","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-25T18:55:51.542038Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39764","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-25T18:55:51.551978Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39790","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-25T18:55:51.563389Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39830","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-25T18:55:51.572071Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39844","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-25T18:55:51.581715Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39864","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-25T18:55:51.591736Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39898","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-25T18:55:51.603188Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39932","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-25T18:55:51.609074Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39964","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-25T18:55:51.626758Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40018","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-25T18:55:51.639260Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40072","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-25T18:55:51.648231Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40114","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-25T18:55:51.726855Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40248","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 18:56:25 up 38 min,  0 user,  load average: 3.32, 1.66, 1.28
	Linux pause-720311 6.8.0-1045-gcp #48~22.04.1-Ubuntu SMP Tue Nov 25 13:07:56 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [a2904aa2cfe95db6e5592e32f3ace1f7425fbdeeb1a508fcff3e613537c98289] <==
	I1225 18:56:03.011550       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1225 18:56:03.012059       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1225 18:56:03.012230       1 main.go:148] setting mtu 1500 for CNI 
	I1225 18:56:03.012250       1 main.go:178] kindnetd IP family: "ipv4"
	I1225 18:56:03.012270       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-12-25T18:56:03Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1225 18:56:03.305027       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1225 18:56:03.305123       1 controller.go:381] "Waiting for informer caches to sync"
	I1225 18:56:03.305141       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1225 18:56:03.305274       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1225 18:56:03.705314       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1225 18:56:03.705345       1 metrics.go:72] Registering metrics
	I1225 18:56:03.705419       1 controller.go:711] "Syncing nftables rules"
	I1225 18:56:13.220006       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1225 18:56:13.220090       1 main.go:301] handling current node
	I1225 18:56:23.225061       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1225 18:56:23.225121       1 main.go:301] handling current node
	
	
	==> kube-apiserver [0ff8d5bb771c5b8d4911f9ce99fbf23ccb7accf7fdf9297efd5cae3ca935e25b] <==
	I1225 18:55:52.412509       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1225 18:55:52.412514       1 cache.go:39] Caches are synced for autoregister controller
	I1225 18:55:52.412691       1 controller.go:667] quota admission added evaluator for: namespaces
	I1225 18:55:52.414417       1 default_servicecidr_controller.go:228] Setting default ServiceCIDR condition Ready to True
	I1225 18:55:52.414739       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1225 18:55:52.421962       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1225 18:55:52.422361       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1225 18:55:52.618389       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1225 18:55:53.312314       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1225 18:55:53.316222       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1225 18:55:53.316238       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1225 18:55:53.752589       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1225 18:55:53.788285       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1225 18:55:53.915649       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1225 18:55:53.924702       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.76.2]
	I1225 18:55:53.925788       1 controller.go:667] quota admission added evaluator for: endpoints
	I1225 18:55:53.929775       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1225 18:55:54.342909       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1225 18:55:55.082790       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1225 18:55:55.096433       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1225 18:55:55.106290       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1225 18:56:00.096339       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1225 18:56:00.401578       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1225 18:56:00.408331       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1225 18:56:00.445553       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	
	
	==> kube-controller-manager [7db894da7184856a5ed52b87a0cf50749a3b9f33a3b9a1ec71162758bee876eb] <==
	I1225 18:55:59.342890       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1225 18:55:59.342915       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1225 18:55:59.342944       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1225 18:55:59.343136       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1225 18:55:59.343173       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1225 18:55:59.343285       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1225 18:55:59.343335       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1225 18:55:59.343359       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="pause-720311"
	I1225 18:55:59.343402       1 node_lifecycle_controller.go:1025] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I1225 18:55:59.343611       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1225 18:55:59.343642       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1225 18:55:59.344556       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1225 18:55:59.345935       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1225 18:55:59.346721       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1225 18:55:59.346746       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1225 18:55:59.346823       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1225 18:55:59.346878       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1225 18:55:59.346889       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1225 18:55:59.346917       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1225 18:55:59.352247       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1225 18:55:59.353425       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1225 18:55:59.357245       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="pause-720311" podCIDRs=["10.244.0.0/24"]
	I1225 18:55:59.361549       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1225 18:55:59.374976       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1225 18:56:14.362722       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [021063243ac013ff261196bce21f685dd9e9bb3617953fe9787a2f11319900cd] <==
	I1225 18:56:00.897804       1 server_linux.go:53] "Using iptables proxy"
	I1225 18:56:00.983596       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1225 18:56:01.084274       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1225 18:56:01.084367       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E1225 18:56:01.084555       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1225 18:56:01.146116       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1225 18:56:01.146190       1 server_linux.go:132] "Using iptables Proxier"
	I1225 18:56:01.162437       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1225 18:56:01.163710       1 server.go:527] "Version info" version="v1.34.3"
	I1225 18:56:01.163831       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1225 18:56:01.169636       1 config.go:200] "Starting service config controller"
	I1225 18:56:01.170381       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1225 18:56:01.170914       1 config.go:309] "Starting node config controller"
	I1225 18:56:01.175048       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1225 18:56:01.175166       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1225 18:56:01.171093       1 config.go:106] "Starting endpoint slice config controller"
	I1225 18:56:01.175193       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1225 18:56:01.171107       1 config.go:403] "Starting serviceCIDR config controller"
	I1225 18:56:01.175211       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1225 18:56:01.270758       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1225 18:56:01.275930       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1225 18:56:01.275938       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [4f901c5416c8ce2d27a266ec4f89feaf3fdc93bea79b0b406be3c90d65d83503] <==
	E1225 18:55:52.402368       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1225 18:55:52.402521       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1225 18:55:52.402655       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1225 18:55:52.402748       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1225 18:55:52.403293       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1225 18:55:52.403479       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1225 18:55:52.403555       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1225 18:55:52.403631       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1225 18:55:52.403686       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1225 18:55:52.403732       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1225 18:55:52.403873       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1225 18:55:52.403971       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1225 18:55:52.404035       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1225 18:55:52.404109       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1225 18:55:53.263626       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1225 18:55:53.275798       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1225 18:55:53.407971       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1225 18:55:53.412467       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E1225 18:55:53.447521       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1225 18:55:53.488633       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1225 18:55:53.488842       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1225 18:55:53.544928       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1225 18:55:53.557052       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1225 18:55:53.584205       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	I1225 18:55:55.498191       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Dec 25 18:55:55 pause-720311 kubelet[1318]: I1225 18:55:55.910803    1318 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
	Dec 25 18:55:55 pause-720311 kubelet[1318]: I1225 18:55:55.997680    1318 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-pause-720311" podStartSLOduration=0.997634342 podStartE2EDuration="997.634342ms" podCreationTimestamp="2025-12-25 18:55:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-25 18:55:55.995365701 +0000 UTC m=+1.172300015" watchObservedRunningTime="2025-12-25 18:55:55.997634342 +0000 UTC m=+1.174568650"
	Dec 25 18:55:56 pause-720311 kubelet[1318]: I1225 18:55:56.050769    1318 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-pause-720311" podStartSLOduration=2.050743785 podStartE2EDuration="2.050743785s" podCreationTimestamp="2025-12-25 18:55:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-25 18:55:56.032691721 +0000 UTC m=+1.209626028" watchObservedRunningTime="2025-12-25 18:55:56.050743785 +0000 UTC m=+1.227678084"
	Dec 25 18:55:56 pause-720311 kubelet[1318]: I1225 18:55:56.064859    1318 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-pause-720311" podStartSLOduration=2.064834508 podStartE2EDuration="2.064834508s" podCreationTimestamp="2025-12-25 18:55:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-25 18:55:56.051393102 +0000 UTC m=+1.228327397" watchObservedRunningTime="2025-12-25 18:55:56.064834508 +0000 UTC m=+1.241768812"
	Dec 25 18:55:56 pause-720311 kubelet[1318]: I1225 18:55:56.081929    1318 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/etcd-pause-720311" podStartSLOduration=1.081890628 podStartE2EDuration="1.081890628s" podCreationTimestamp="2025-12-25 18:55:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-25 18:55:56.066064186 +0000 UTC m=+1.242998490" watchObservedRunningTime="2025-12-25 18:55:56.081890628 +0000 UTC m=+1.258824933"
	Dec 25 18:55:59 pause-720311 kubelet[1318]: I1225 18:55:59.369254    1318 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Dec 25 18:55:59 pause-720311 kubelet[1318]: I1225 18:55:59.370008    1318 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Dec 25 18:56:00 pause-720311 kubelet[1318]: I1225 18:56:00.649655    1318 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/110a90ce-5573-4b28-a6ef-f3eead8b4814-xtables-lock\") pod \"kindnet-s9r7k\" (UID: \"110a90ce-5573-4b28-a6ef-f3eead8b4814\") " pod="kube-system/kindnet-s9r7k"
	Dec 25 18:56:00 pause-720311 kubelet[1318]: I1225 18:56:00.649712    1318 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/110a90ce-5573-4b28-a6ef-f3eead8b4814-lib-modules\") pod \"kindnet-s9r7k\" (UID: \"110a90ce-5573-4b28-a6ef-f3eead8b4814\") " pod="kube-system/kindnet-s9r7k"
	Dec 25 18:56:00 pause-720311 kubelet[1318]: I1225 18:56:00.650175    1318 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/948d733e-0cf1-4a38-a38a-cb6750dabc83-xtables-lock\") pod \"kube-proxy-2r7sc\" (UID: \"948d733e-0cf1-4a38-a38a-cb6750dabc83\") " pod="kube-system/kube-proxy-2r7sc"
	Dec 25 18:56:00 pause-720311 kubelet[1318]: I1225 18:56:00.650219    1318 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g8gxm\" (UniqueName: \"kubernetes.io/projected/948d733e-0cf1-4a38-a38a-cb6750dabc83-kube-api-access-g8gxm\") pod \"kube-proxy-2r7sc\" (UID: \"948d733e-0cf1-4a38-a38a-cb6750dabc83\") " pod="kube-system/kube-proxy-2r7sc"
	Dec 25 18:56:00 pause-720311 kubelet[1318]: I1225 18:56:00.650250    1318 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j4lcv\" (UniqueName: \"kubernetes.io/projected/110a90ce-5573-4b28-a6ef-f3eead8b4814-kube-api-access-j4lcv\") pod \"kindnet-s9r7k\" (UID: \"110a90ce-5573-4b28-a6ef-f3eead8b4814\") " pod="kube-system/kindnet-s9r7k"
	Dec 25 18:56:00 pause-720311 kubelet[1318]: I1225 18:56:00.650271    1318 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/948d733e-0cf1-4a38-a38a-cb6750dabc83-kube-proxy\") pod \"kube-proxy-2r7sc\" (UID: \"948d733e-0cf1-4a38-a38a-cb6750dabc83\") " pod="kube-system/kube-proxy-2r7sc"
	Dec 25 18:56:00 pause-720311 kubelet[1318]: I1225 18:56:00.650301    1318 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/110a90ce-5573-4b28-a6ef-f3eead8b4814-cni-cfg\") pod \"kindnet-s9r7k\" (UID: \"110a90ce-5573-4b28-a6ef-f3eead8b4814\") " pod="kube-system/kindnet-s9r7k"
	Dec 25 18:56:00 pause-720311 kubelet[1318]: I1225 18:56:00.650333    1318 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/948d733e-0cf1-4a38-a38a-cb6750dabc83-lib-modules\") pod \"kube-proxy-2r7sc\" (UID: \"948d733e-0cf1-4a38-a38a-cb6750dabc83\") " pod="kube-system/kube-proxy-2r7sc"
	Dec 25 18:56:01 pause-720311 kubelet[1318]: I1225 18:56:01.004060    1318 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-2r7sc" podStartSLOduration=1.004035913 podStartE2EDuration="1.004035913s" podCreationTimestamp="2025-12-25 18:56:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-25 18:56:01.002170894 +0000 UTC m=+6.179105198" watchObservedRunningTime="2025-12-25 18:56:01.004035913 +0000 UTC m=+6.180970217"
	Dec 25 18:56:03 pause-720311 kubelet[1318]: I1225 18:56:03.362867    1318 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-s9r7k" podStartSLOduration=1.469956137 podStartE2EDuration="3.362844082s" podCreationTimestamp="2025-12-25 18:56:00 +0000 UTC" firstStartedPulling="2025-12-25 18:56:00.780416251 +0000 UTC m=+5.957350539" lastFinishedPulling="2025-12-25 18:56:02.673304183 +0000 UTC m=+7.850238484" observedRunningTime="2025-12-25 18:56:03.026306171 +0000 UTC m=+8.203240475" watchObservedRunningTime="2025-12-25 18:56:03.362844082 +0000 UTC m=+8.539778385"
	Dec 25 18:56:13 pause-720311 kubelet[1318]: I1225 18:56:13.503377    1318 kubelet_node_status.go:439] "Fast updating node status as it just became ready"
	Dec 25 18:56:13 pause-720311 kubelet[1318]: I1225 18:56:13.552022    1318 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v8v6m\" (UniqueName: \"kubernetes.io/projected/3d326b5f-ad06-4352-8d63-5a95a4791894-kube-api-access-v8v6m\") pod \"coredns-66bc5c9577-mcpjn\" (UID: \"3d326b5f-ad06-4352-8d63-5a95a4791894\") " pod="kube-system/coredns-66bc5c9577-mcpjn"
	Dec 25 18:56:13 pause-720311 kubelet[1318]: I1225 18:56:13.552071    1318 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3d326b5f-ad06-4352-8d63-5a95a4791894-config-volume\") pod \"coredns-66bc5c9577-mcpjn\" (UID: \"3d326b5f-ad06-4352-8d63-5a95a4791894\") " pod="kube-system/coredns-66bc5c9577-mcpjn"
	Dec 25 18:56:14 pause-720311 kubelet[1318]: I1225 18:56:14.042515    1318 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-mcpjn" podStartSLOduration=14.042492792000001 podStartE2EDuration="14.042492792s" podCreationTimestamp="2025-12-25 18:56:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-25 18:56:14.042474016 +0000 UTC m=+19.219408320" watchObservedRunningTime="2025-12-25 18:56:14.042492792 +0000 UTC m=+19.219427097"
	Dec 25 18:56:22 pause-720311 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Dec 25 18:56:22 pause-720311 systemd[1]: kubelet.service: Deactivated successfully.
	Dec 25 18:56:22 pause-720311 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 25 18:56:22 pause-720311 systemd[1]: kubelet.service: Consumed 1.164s CPU time.
	

                                                
                                                
-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-720311 -n pause-720311
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-720311 -n pause-720311: exit status 2 (361.090403ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:270: (dbg) Run:  kubectl --context pause-720311 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:294: <<< TestPause/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:295: ---------------------/post-mortem---------------------------------
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestPause/serial/Pause]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestPause/serial/Pause]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect pause-720311
helpers_test.go:244: (dbg) docker inspect pause-720311:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "e4864a536fdb5b8aceb5be52bd2004d426714e42760f81e5768ee6b18221ffae",
	        "Created": "2025-12-25T18:55:40.545827098Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 187097,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-25T18:55:40.602604014Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:c444b5e11df9d6bc496256b0e11f4e11deb33a885211ded2c3a4667df55bcb6b",
	        "ResolvConfPath": "/var/lib/docker/containers/e4864a536fdb5b8aceb5be52bd2004d426714e42760f81e5768ee6b18221ffae/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/e4864a536fdb5b8aceb5be52bd2004d426714e42760f81e5768ee6b18221ffae/hostname",
	        "HostsPath": "/var/lib/docker/containers/e4864a536fdb5b8aceb5be52bd2004d426714e42760f81e5768ee6b18221ffae/hosts",
	        "LogPath": "/var/lib/docker/containers/e4864a536fdb5b8aceb5be52bd2004d426714e42760f81e5768ee6b18221ffae/e4864a536fdb5b8aceb5be52bd2004d426714e42760f81e5768ee6b18221ffae-json.log",
	        "Name": "/pause-720311",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "pause-720311:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "pause-720311",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "e4864a536fdb5b8aceb5be52bd2004d426714e42760f81e5768ee6b18221ffae",
	                "LowerDir": "/var/lib/docker/overlay2/8e1c4f0ab76d02e8450e6b83ec3758f662b4df08ad6b26f9253afb0c89fbcf50-init/diff:/var/lib/docker/overlay2/8152586e7e91edad0090b5c322534edd1346ae6dc28cbca1827aa4c23f366758/diff",
	                "MergedDir": "/var/lib/docker/overlay2/8e1c4f0ab76d02e8450e6b83ec3758f662b4df08ad6b26f9253afb0c89fbcf50/merged",
	                "UpperDir": "/var/lib/docker/overlay2/8e1c4f0ab76d02e8450e6b83ec3758f662b4df08ad6b26f9253afb0c89fbcf50/diff",
	                "WorkDir": "/var/lib/docker/overlay2/8e1c4f0ab76d02e8450e6b83ec3758f662b4df08ad6b26f9253afb0c89fbcf50/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "pause-720311",
	                "Source": "/var/lib/docker/volumes/pause-720311/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "pause-720311",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "pause-720311",
	                "name.minikube.sigs.k8s.io": "pause-720311",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "3012d7fab79dccb9bcb575678d5cf615799851ef87e298ac5d39531452fc7c50",
	            "SandboxKey": "/var/run/docker/netns/3012d7fab79d",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32973"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32974"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32977"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32975"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32976"
	                    }
	                ]
	            },
	            "Networks": {
	                "pause-720311": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "0f23bcac1d2c3903d8aa20c6bf8e5adb4ee52801ca08ef1f3738609edea52e42",
	                    "EndpointID": "b1fb4254d06e43a766c79910d7428c2aa9721c58630b5e07a81110d2cb1d0fa9",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "MacAddress": "c6:1b:73:4b:23:69",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "pause-720311",
	                        "e4864a536fdb"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p pause-720311 -n pause-720311
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p pause-720311 -n pause-720311: exit status 2 (347.89012ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
helpers_test.go:253: <<< TestPause/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestPause/serial/Pause]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-amd64 -p pause-720311 logs -n 25
helpers_test.go:261: TestPause/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────┬─────────────────────┬─────────┬─────────┬─────────────────────┬──────────┐
	│ COMMAND │                                                 ARGS                                                  │       PROFILE       │  USER   │ VERSION │     START TIME      │ END TIME │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────┼─────────────────────┼─────────┼─────────┼─────────────────────┼──────────┤
	│ start   │ -p false-910464 --memory=3072 --alsologtostderr --cni=false --driver=docker  --container-runtime=crio │ false-910464        │ jenkins │ v1.37.0 │ 25 Dec 25 18:56 UTC │          │
	│ ssh     │ -p false-910464 sudo cat /etc/nsswitch.conf                                                           │ false-910464        │ jenkins │ v1.37.0 │ 25 Dec 25 18:56 UTC │          │
	│ ssh     │ -p false-910464 sudo cat /etc/hosts                                                                   │ false-910464        │ jenkins │ v1.37.0 │ 25 Dec 25 18:56 UTC │          │
	│ ssh     │ -p false-910464 sudo cat /etc/resolv.conf                                                             │ false-910464        │ jenkins │ v1.37.0 │ 25 Dec 25 18:56 UTC │          │
	│ ssh     │ -p false-910464 sudo crictl pods                                                                      │ false-910464        │ jenkins │ v1.37.0 │ 25 Dec 25 18:56 UTC │          │
	│ ssh     │ -p false-910464 sudo crictl ps --all                                                                  │ false-910464        │ jenkins │ v1.37.0 │ 25 Dec 25 18:56 UTC │          │
	│ ssh     │ -p false-910464 sudo find /etc/cni -type f -exec sh -c 'echo {}; cat {}' \;                           │ false-910464        │ jenkins │ v1.37.0 │ 25 Dec 25 18:56 UTC │          │
	│ ssh     │ -p false-910464 sudo ip a s                                                                           │ false-910464        │ jenkins │ v1.37.0 │ 25 Dec 25 18:56 UTC │          │
	│ ssh     │ -p false-910464 sudo ip r s                                                                           │ false-910464        │ jenkins │ v1.37.0 │ 25 Dec 25 18:56 UTC │          │
	│ ssh     │ -p false-910464 sudo iptables-save                                                                    │ false-910464        │ jenkins │ v1.37.0 │ 25 Dec 25 18:56 UTC │          │
	│ ssh     │ -p false-910464 sudo iptables -t nat -L -n -v                                                         │ false-910464        │ jenkins │ v1.37.0 │ 25 Dec 25 18:56 UTC │          │
	│ ssh     │ -p false-910464 sudo systemctl status kubelet --all --full --no-pager                                 │ false-910464        │ jenkins │ v1.37.0 │ 25 Dec 25 18:56 UTC │          │
	│ ssh     │ -p false-910464 sudo systemctl cat kubelet --no-pager                                                 │ false-910464        │ jenkins │ v1.37.0 │ 25 Dec 25 18:56 UTC │          │
	│ ssh     │ -p false-910464 sudo journalctl -xeu kubelet --all --full --no-pager                                  │ false-910464        │ jenkins │ v1.37.0 │ 25 Dec 25 18:56 UTC │          │
	│ ssh     │ -p false-910464 sudo cat /var/lib/kubelet/config.yaml                                                 │ false-910464        │ jenkins │ v1.37.0 │ 25 Dec 25 18:56 UTC │          │
	│ ssh     │ -p false-910464 sudo systemctl status docker --all --full --no-pager                                  │ false-910464        │ jenkins │ v1.37.0 │ 25 Dec 25 18:56 UTC │          │
	│ ssh     │ -p false-910464 sudo systemctl cat docker --no-pager                                                  │ false-910464        │ jenkins │ v1.37.0 │ 25 Dec 25 18:56 UTC │          │
	│ ssh     │ -p false-910464 sudo cat /etc/docker/daemon.json                                                      │ false-910464        │ jenkins │ v1.37.0 │ 25 Dec 25 18:56 UTC │          │
	│ delete  │ -p NoKubernetes-904366                                                                                │ NoKubernetes-904366 │ jenkins │ v1.37.0 │ 25 Dec 25 18:56 UTC │          │
	│ ssh     │ -p false-910464 sudo docker system info                                                               │ false-910464        │ jenkins │ v1.37.0 │ 25 Dec 25 18:56 UTC │          │
	│ ssh     │ -p false-910464 sudo systemctl status cri-docker --all --full --no-pager                              │ false-910464        │ jenkins │ v1.37.0 │ 25 Dec 25 18:56 UTC │          │
	│ ssh     │ -p false-910464 sudo systemctl cat cri-docker --no-pager                                              │ false-910464        │ jenkins │ v1.37.0 │ 25 Dec 25 18:56 UTC │          │
	│ ssh     │ -p false-910464 sudo cat /etc/systemd/system/cri-docker.service.d/10-cni.conf                         │ false-910464        │ jenkins │ v1.37.0 │ 25 Dec 25 18:56 UTC │          │
	│ ssh     │ -p false-910464 sudo cat /usr/lib/systemd/system/cri-docker.service                                   │ false-910464        │ jenkins │ v1.37.0 │ 25 Dec 25 18:56 UTC │          │
	│ ssh     │ -p false-910464 sudo cri-dockerd --version                                                            │ false-910464        │ jenkins │ v1.37.0 │ 25 Dec 25 18:56 UTC │          │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────┴─────────────────────┴─────────┴─────────┴─────────────────────┴──────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/25 18:56:23
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1225 18:56:23.165141  200995 out.go:360] Setting OutFile to fd 1 ...
	I1225 18:56:23.165237  200995 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1225 18:56:23.165242  200995 out.go:374] Setting ErrFile to fd 2...
	I1225 18:56:23.165246  200995 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1225 18:56:23.165421  200995 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22301-5579/.minikube/bin
	I1225 18:56:23.165876  200995 out.go:368] Setting JSON to false
	I1225 18:56:23.167137  200995 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":2331,"bootTime":1766686652,"procs":297,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1225 18:56:23.167191  200995 start.go:143] virtualization: kvm guest
	I1225 18:56:23.169015  200995 out.go:179] * [false-910464] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1225 18:56:23.170107  200995 out.go:179]   - MINIKUBE_LOCATION=22301
	I1225 18:56:23.170111  200995 notify.go:221] Checking for updates...
	I1225 18:56:23.172138  200995 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1225 18:56:23.173213  200995 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22301-5579/kubeconfig
	I1225 18:56:23.177121  200995 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22301-5579/.minikube
	I1225 18:56:23.178308  200995 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1225 18:56:23.179464  200995 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1225 18:56:23.181090  200995 config.go:182] Loaded profile config "NoKubernetes-904366": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v0.0.0
	I1225 18:56:23.181290  200995 config.go:182] Loaded profile config "pause-720311": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1225 18:56:23.181405  200995 config.go:182] Loaded profile config "stopped-upgrade-746190": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.0
	I1225 18:56:23.181510  200995 driver.go:422] Setting default libvirt URI to qemu:///system
	I1225 18:56:23.205227  200995 docker.go:124] docker version: linux-29.1.3:Docker Engine - Community
	I1225 18:56:23.205354  200995 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1225 18:56:23.267258  200995 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:65 OomKillDisable:false NGoroutines:76 SystemTime:2025-12-25 18:56:23.252824051 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:dea7da592f5d1d2b7755e3a161be07f43fad8f75 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1225 18:56:23.267411  200995 docker.go:319] overlay module found
	I1225 18:56:23.269413  200995 out.go:179] * Using the docker driver based on user configuration
	I1225 18:56:23.270469  200995 start.go:309] selected driver: docker
	I1225 18:56:23.270483  200995 start.go:928] validating driver "docker" against <nil>
	I1225 18:56:23.270494  200995 start.go:939] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1225 18:56:23.275372  200995 out.go:203] 
	W1225 18:56:23.276463  200995 out.go:285] X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	I1225 18:56:23.277491  200995 out.go:203] 
	I1225 18:56:23.232973  196945 api_server.go:315] stopped: https://192.168.103.2:8443/healthz: Get "https://192.168.103.2:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1225 18:56:23.233013  196945 api_server.go:299] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1225 18:56:24.206499  195806 ssh_runner.go:235] Completed: sudo /usr/local/bin/crictl stop --timeout=10 553bf7552317ad9f9bf527673f84c4ca6e6610645e903ba07325ceb8d467b820 bfcb4eba10bfaf5fc9839b8940bf7dd8ca5a05a999d54b8b1ea10200ce1501ae d99c5329b327c2e61f29123aff910b4d838e44791908efe5eb56490161992b68 f4fd56404d0862a9f7f2fdb8fe3c8f74f13bc83b7f52d9a04e866d4211a0cd02: (17.871993362s)
	I1225 18:56:24.206605  195806 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1225 18:56:24.225033  195806 out.go:179]   - Kubernetes: Stopped
	I1225 18:56:24.226857  195806 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1225 18:56:24.270686  195806 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1225 18:56:24.275471  195806 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1225 18:56:24.275546  195806 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1225 18:56:24.283685  195806 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1225 18:56:24.283709  195806 start.go:496] detecting cgroup driver to use...
	I1225 18:56:24.283741  195806 detect.go:190] detected "systemd" cgroup driver on host os
	I1225 18:56:24.283788  195806 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1225 18:56:24.301738  195806 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1225 18:56:24.316401  195806 docker.go:218] disabling cri-docker service (if available) ...
	I1225 18:56:24.316476  195806 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1225 18:56:24.335917  195806 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1225 18:56:24.350467  195806 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1225 18:56:24.457972  195806 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1225 18:56:24.560190  195806 docker.go:234] disabling docker service ...
	I1225 18:56:24.560251  195806 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1225 18:56:24.575206  195806 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1225 18:56:24.588560  195806 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1225 18:56:24.696690  195806 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1225 18:56:24.805965  195806 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1225 18:56:24.820265  195806 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1225 18:56:24.835434  195806 binary.go:59] Skipping Kubernetes binary download due to --no-kubernetes flag
	I1225 18:56:24.835474  195806 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I1225 18:56:24.835514  195806 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1225 18:56:24.844823  195806 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1225 18:56:24.844884  195806 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1225 18:56:24.854724  195806 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1225 18:56:24.864973  195806 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1225 18:56:24.875807  195806 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1225 18:56:24.885759  195806 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1225 18:56:24.894771  195806 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1225 18:56:24.903156  195806 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1225 18:56:25.006245  195806 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1225 18:56:25.184313  195806 start.go:553] Will wait 60s for socket path /var/run/crio/crio.sock
	I1225 18:56:25.184386  195806 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1225 18:56:25.188828  195806 start.go:574] Will wait 60s for crictl version
	I1225 18:56:25.188889  195806 ssh_runner.go:195] Run: which crictl
	I1225 18:56:25.193485  195806 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1225 18:56:25.222014  195806 start.go:590] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1225 18:56:25.222102  195806 ssh_runner.go:195] Run: crio --version
	I1225 18:56:25.258991  195806 ssh_runner.go:195] Run: crio --version
	I1225 18:56:25.292041  195806 out.go:179] * Preparing CRI-O 1.34.3 ...
	I1225 18:56:25.293362  195806 ssh_runner.go:195] Run: rm -f paused
	I1225 18:56:25.301334  195806 out.go:179] * Done! minikube is ready without Kubernetes!
	I1225 18:56:25.302972  195806 out.go:203] ╭───────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                       │
	│                        * Things to try without Kubernetes ...                         │
	│                                                                                       │
	│    - "minikube ssh" to SSH into minikube's node.                                      │
	│    - "minikube podman-env" to point your podman-cli to the podman inside minikube.    │
	│    - "minikube image" to build images without docker.                                 │
	│                                                                                       │
	╰───────────────────────────────────────────────────────────────────────────────────────╯
	
	
	==> CRI-O <==
	Dec 25 18:56:18 pause-720311 crio[2201]: time="2025-12-25T18:56:18.751870871Z" level=info msg="RDT not available in the host system"
	Dec 25 18:56:18 pause-720311 crio[2201]: time="2025-12-25T18:56:18.751884148Z" level=info msg="Using conmon executable: /usr/libexec/crio/conmon"
	Dec 25 18:56:18 pause-720311 crio[2201]: time="2025-12-25T18:56:18.752633437Z" level=info msg="Conmon does support the --sync option"
	Dec 25 18:56:18 pause-720311 crio[2201]: time="2025-12-25T18:56:18.752648777Z" level=info msg="Conmon does support the --log-global-size-max option"
	Dec 25 18:56:18 pause-720311 crio[2201]: time="2025-12-25T18:56:18.752660517Z" level=info msg="Using conmon executable: /usr/libexec/crio/conmon"
	Dec 25 18:56:18 pause-720311 crio[2201]: time="2025-12-25T18:56:18.75333418Z" level=info msg="Conmon does support the --sync option"
	Dec 25 18:56:18 pause-720311 crio[2201]: time="2025-12-25T18:56:18.75334737Z" level=info msg="Conmon does support the --log-global-size-max option"
	Dec 25 18:56:18 pause-720311 crio[2201]: time="2025-12-25T18:56:18.75704568Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Dec 25 18:56:18 pause-720311 crio[2201]: time="2025-12-25T18:56:18.757064689Z" level=info msg="Updated default CNI network name to kindnet"
	Dec 25 18:56:18 pause-720311 crio[2201]: time="2025-12-25T18:56:18.757495251Z" level=info msg="Current CRI-O configuration:\n[crio]\n  root = \"/var/lib/containers/storage\"\n  runroot = \"/run/containers/storage\"\n  imagestore = \"\"\n  storage_driver = \"overlay\"\n  log_dir = \"/var/log/crio/pods\"\n  version_file = \"/var/run/crio/version\"\n  version_file_persist = \"\"\n  clean_shutdown_file = \"/var/lib/crio/clean.shutdown\"\n  internal_wipe = true\n  internal_repair = true\n  [crio.api]\n    grpc_max_send_msg_size = 83886080\n    grpc_max_recv_msg_size = 83886080\n    listen = \"/var/run/crio/crio.sock\"\n    stream_address = \"127.0.0.1\"\n    stream_port = \"0\"\n    stream_enable_tls = false\n    stream_tls_cert = \"\"\n    stream_tls_key = \"\"\n    stream_tls_ca = \"\"\n    stream_idle_timeout = \"\"\n  [crio.runtime]\n    no_pivot = false\n    selinux = false\n    log_to_journald = false\n    drop_infra_ctr = true\n    read_only = false\n    hooks_dir = [\"/usr/share/containers/oci/hoo
ks.d\"]\n    default_capabilities = [\"CHOWN\", \"DAC_OVERRIDE\", \"FSETID\", \"FOWNER\", \"SETGID\", \"SETUID\", \"SETPCAP\", \"NET_BIND_SERVICE\", \"KILL\"]\n    add_inheritable_capabilities = false\n    default_sysctls = [\"net.ipv4.ip_unprivileged_port_start=0\"]\n    allowed_devices = [\"/dev/fuse\", \"/dev/net/tun\"]\n    cdi_spec_dirs = [\"/etc/cdi\", \"/var/run/cdi\"]\n    device_ownership_from_security_context = false\n    default_runtime = \"crun\"\n    decryption_keys_path = \"/etc/crio/keys/\"\n    conmon = \"\"\n    conmon_cgroup = \"pod\"\n    seccomp_profile = \"\"\n    privileged_seccomp_profile = \"\"\n    apparmor_profile = \"crio-default\"\n    blockio_config_file = \"\"\n    blockio_reload = false\n    irqbalance_config_file = \"/etc/sysconfig/irqbalance\"\n    rdt_config_file = \"\"\n    cgroup_manager = \"systemd\"\n    default_mounts_file = \"\"\n    container_exits_dir = \"/var/run/crio/exits\"\n    container_attach_socket_dir = \"/var/run/crio\"\n    bind_mount_prefix = \"\"\n    uid_
mappings = \"\"\n    minimum_mappable_uid = -1\n    gid_mappings = \"\"\n    minimum_mappable_gid = -1\n    log_level = \"info\"\n    log_filter = \"\"\n    namespaces_dir = \"/var/run\"\n    pinns_path = \"/usr/bin/pinns\"\n    enable_criu_support = false\n    pids_limit = -1\n    log_size_max = -1\n    ctr_stop_timeout = 30\n    separate_pull_cgroup = \"\"\n    infra_ctr_cpuset = \"\"\n    shared_cpuset = \"\"\n    enable_pod_events = false\n    irqbalance_config_restore_file = \"/etc/sysconfig/orig_irq_banned_cpus\"\n    hostnetwork_disable_selinux = true\n    disable_hostport_mapping = false\n    timezone = \"\"\n    [crio.runtime.runtimes]\n      [crio.runtime.runtimes.crun]\n        runtime_config_path = \"\"\n        runtime_path = \"/usr/libexec/crio/crun\"\n        runtime_type = \"\"\n        runtime_root = \"/run/crun\"\n        allowed_annotations = [\"io.containers.trace-syscall\"]\n        monitor_path = \"/usr/libexec/crio/conmon\"\n        monitor_cgroup = \"pod\"\n        container_min_memory
= \"12MiB\"\n        no_sync_log = false\n      [crio.runtime.runtimes.runc]\n        runtime_config_path = \"\"\n        runtime_path = \"/usr/libexec/crio/runc\"\n        runtime_type = \"\"\n        runtime_root = \"/run/runc\"\n        monitor_path = \"/usr/libexec/crio/conmon\"\n        monitor_cgroup = \"pod\"\n        container_min_memory = \"12MiB\"\n        no_sync_log = false\n  [crio.image]\n    default_transport = \"docker://\"\n    global_auth_file = \"\"\n    namespaced_auth_dir = \"/etc/crio/auth\"\n    pause_image = \"registry.k8s.io/pause:3.10.1\"\n    pause_image_auth_file = \"\"\n    pause_command = \"/pause\"\n    signature_policy = \"/etc/crio/policy.json\"\n    signature_policy_dir = \"/etc/crio/policies\"\n    image_volumes = \"mkdir\"\n    big_files_temporary_dir = \"\"\n    auto_reload_registries = false\n    pull_progress_timeout = \"0s\"\n    oci_artifact_mount_support = true\n    short_name_mode = \"enforcing\"\n  [crio.network]\n    cni_default_network = \"\"\n    network_dir = \
"/etc/cni/net.d/\"\n    plugin_dirs = [\"/opt/cni/bin/\"]\n  [crio.metrics]\n    enable_metrics = false\n    metrics_collectors = [\"image_pulls_layer_size\", \"containers_events_dropped_total\", \"containers_oom_total\", \"processes_defunct\", \"operations_total\", \"operations_latency_seconds\", \"operations_latency_seconds_total\", \"operations_errors_total\", \"image_pulls_bytes_total\", \"image_pulls_skipped_bytes_total\", \"image_pulls_failure_total\", \"image_pulls_success_total\", \"image_layer_reuse_total\", \"containers_oom_count_total\", \"containers_seccomp_notifier_count_total\", \"resources_stalled_at_stage\", \"containers_stopped_monitor_count\"]\n    metrics_host = \"127.0.0.1\"\n    metrics_port = 9090\n    metrics_socket = \"\"\n    metrics_cert = \"\"\n    metrics_key = \"\"\n  [crio.tracing]\n    enable_tracing = false\n    tracing_endpoint = \"127.0.0.1:4317\"\n    tracing_sampling_rate_per_million = 0\n  [crio.stats]\n    stats_collection_period = 0\n    collection_period = 0\n  [crio.nr
i]\n    enable_nri = true\n    nri_listen = \"/var/run/nri/nri.sock\"\n    nri_plugin_dir = \"/opt/nri/plugins\"\n    nri_plugin_config_dir = \"/etc/nri/conf.d\"\n    nri_plugin_registration_timeout = \"5s\"\n    nri_plugin_request_timeout = \"2s\"\n    nri_disable_connections = false\n    [crio.nri.default_validator]\n      nri_enable_default_validator = false\n      nri_validator_reject_oci_hook_adjustment = false\n      nri_validator_reject_runtime_default_seccomp_adjustment = false\n      nri_validator_reject_unconfined_seccomp_adjustment = false\n      nri_validator_reject_custom_seccomp_adjustment = false\n      nri_validator_reject_namespace_adjustment = false\n      nri_validator_tolerate_missing_plugins_annotation = \"\"\n"
	Dec 25 18:56:18 pause-720311 crio[2201]: time="2025-12-25T18:56:18.757826587Z" level=info msg="Attempting to restore irqbalance config from /etc/sysconfig/orig_irq_banned_cpus"
	Dec 25 18:56:18 pause-720311 crio[2201]: time="2025-12-25T18:56:18.757872034Z" level=info msg="Restore irqbalance config: failed to get current CPU ban list, ignoring"
	Dec 25 18:56:18 pause-720311 crio[2201]: time="2025-12-25T18:56:18.834676205Z" level=info msg="Got pod network &{Name:coredns-66bc5c9577-mcpjn Namespace:kube-system ID:136888f8b9239f432bb9f1c978a0bac5ba8cf5a3b21b494d99cb7f35749cda4c UID:3d326b5f-ad06-4352-8d63-5a95a4791894 NetNS:/var/run/netns/b9b8d9f7-4e81-4e60-a674-05afe6fb82ea Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc000128700}] Aliases:map[]}"
	Dec 25 18:56:18 pause-720311 crio[2201]: time="2025-12-25T18:56:18.834877716Z" level=info msg="Checking pod kube-system_coredns-66bc5c9577-mcpjn for CNI network kindnet (type=ptp)"
	Dec 25 18:56:18 pause-720311 crio[2201]: time="2025-12-25T18:56:18.835317989Z" level=info msg="Registered SIGHUP reload watcher"
	Dec 25 18:56:18 pause-720311 crio[2201]: time="2025-12-25T18:56:18.835339325Z" level=info msg="Starting seccomp notifier watcher"
	Dec 25 18:56:18 pause-720311 crio[2201]: time="2025-12-25T18:56:18.835378346Z" level=info msg="Create NRI interface"
	Dec 25 18:56:18 pause-720311 crio[2201]: time="2025-12-25T18:56:18.835481933Z" level=info msg="built-in NRI default validator is disabled"
	Dec 25 18:56:18 pause-720311 crio[2201]: time="2025-12-25T18:56:18.835491573Z" level=info msg="runtime interface created"
	Dec 25 18:56:18 pause-720311 crio[2201]: time="2025-12-25T18:56:18.835508488Z" level=info msg="Registered domain \"k8s.io\" with NRI"
	Dec 25 18:56:18 pause-720311 crio[2201]: time="2025-12-25T18:56:18.835518479Z" level=info msg="runtime interface starting up..."
	Dec 25 18:56:18 pause-720311 crio[2201]: time="2025-12-25T18:56:18.835525766Z" level=info msg="starting plugins..."
	Dec 25 18:56:18 pause-720311 crio[2201]: time="2025-12-25T18:56:18.835538906Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
	Dec 25 18:56:18 pause-720311 crio[2201]: time="2025-12-25T18:56:18.835864485Z" level=info msg="No systemd watchdog enabled"
	Dec 25 18:56:18 pause-720311 systemd[1]: Started crio.service - Container Runtime Interface for OCI (CRI-O).
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                    NAMESPACE
	ee960ce66ff05       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                     12 seconds ago      Running             coredns                   0                   136888f8b9239       coredns-66bc5c9577-mcpjn               kube-system
	a2904aa2cfe95       docker.io/kindest/kindnetd@sha256:7c22558dc06a570d46ea6e8a73b23cdc754eb81f7c08d3441a3171ad359ffc27   24 seconds ago      Running             kindnet-cni               0                   88418e2116e55       kindnet-s9r7k                          kube-system
	021063243ac01       36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691                                     25 seconds ago      Running             kube-proxy                0                   22f5f50fc1c7e       kube-proxy-2r7sc                       kube-system
	7db894da71848       5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942                                     36 seconds ago      Running             kube-controller-manager   0                   cc1fd71e38a9d       kube-controller-manager-pause-720311   kube-system
	0ff8d5bb771c5       aa27095f5619377172f3d59289ccb2ba567ebea93a736d1705be068b2c030b0c                                     36 seconds ago      Running             kube-apiserver            0                   0b99720523e7f       kube-apiserver-pause-720311            kube-system
	29a34cf78521a       a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1                                     36 seconds ago      Running             etcd                      0                   17157164a4c0a       etcd-pause-720311                      kube-system
	4f901c5416c8c       aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78                                     36 seconds ago      Running             kube-scheduler            0                   0f39af99e629b       kube-scheduler-pause-720311            kube-system
	
	
	==> coredns [ee960ce66ff058a6e41f7f962f4455a46ea6bdac924d079b7a623f3753679d9b] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 3e2243e8b9e7116f563b83b1933f477a68ba9ad4a829ed5d7e54629fb2ce53528b9bc6023030be20be434ad805fd246296dd428c64e9bbef3a70f22b8621f560
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:55298 - 6574 "HINFO IN 4119510598299851531.9051950715605258099. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.418538587s
	
	
	==> describe nodes <==
	Name:               pause-720311
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=pause-720311
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=65b0339f3ab6fa9cf527eb915d9288ef7a9c7fef
	                    minikube.k8s.io/name=pause-720311
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_25T18_55_55_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 25 Dec 2025 18:55:52 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  pause-720311
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 25 Dec 2025 18:56:15 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 25 Dec 2025 18:56:13 +0000   Thu, 25 Dec 2025 18:55:50 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 25 Dec 2025 18:56:13 +0000   Thu, 25 Dec 2025 18:55:50 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 25 Dec 2025 18:56:13 +0000   Thu, 25 Dec 2025 18:55:50 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 25 Dec 2025 18:56:13 +0000   Thu, 25 Dec 2025 18:56:13 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    pause-720311
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863360Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863360Ki
	  pods:               110
	System Info:
	  Machine ID:                 6a05ebbd9dbde47388fc365d694bbbfd
	  System UUID:                5cd5dddd-5a9c-43ef-b383-6d89754632b0
	  Boot ID:                    665c5054-bd76-444c-ba4d-23c4edde1464
	  Kernel Version:             6.8.0-1045-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.3
	  Kubelet Version:            v1.34.3
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-66bc5c9577-mcpjn                100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     26s
	  kube-system                 etcd-pause-720311                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         31s
	  kube-system                 kindnet-s9r7k                           100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      26s
	  kube-system                 kube-apiserver-pause-720311             250m (3%)     0 (0%)      0 (0%)           0 (0%)         31s
	  kube-system                 kube-controller-manager-pause-720311    200m (2%)     0 (0%)      0 (0%)           0 (0%)         32s
	  kube-system                 kube-proxy-2r7sc                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         26s
	  kube-system                 kube-scheduler-pause-720311             100m (1%)     0 (0%)      0 (0%)           0 (0%)         32s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 25s   kube-proxy       
	  Normal  Starting                 32s   kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  31s   kubelet          Node pause-720311 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    31s   kubelet          Node pause-720311 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     31s   kubelet          Node pause-720311 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           27s   node-controller  Node pause-720311 event: Registered Node pause-720311 in Controller
	  Normal  NodeReady                13s   kubelet          Node pause-720311 status is now: NodeReady
	
	
	==> dmesg <==
	[Dec25 18:17] MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
	[  +0.001703] TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details.
	[  +0.001000] MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
	[  +0.084012] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
	[  +0.391152] i8042: Warning: Keylock active
	[  +0.010665] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.485479] block sda: the capability attribute has been deprecated.
	[  +0.079658] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.024208] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +4.790329] kauditd_printk_skb: 47 callbacks suppressed
	
	
	==> etcd [29a34cf78521a9fdae8bedcf111fe2a71ca43ade40baddac3eb8e95bb8f0d6f7] <==
	{"level":"warn","ts":"2025-12-25T18:55:51.399461Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39476","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-25T18:55:51.408651Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39486","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-25T18:55:51.420611Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39512","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-25T18:55:51.429599Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39528","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-25T18:55:51.443355Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39548","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-25T18:55:51.450499Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39574","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-25T18:55:51.460482Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39594","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-25T18:55:51.470605Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39612","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-25T18:55:51.479068Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39648","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-25T18:55:51.489217Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39652","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-25T18:55:51.499523Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39660","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-25T18:55:51.511124Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39692","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-25T18:55:51.522342Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39704","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-25T18:55:51.542038Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39764","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-25T18:55:51.551978Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39790","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-25T18:55:51.563389Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39830","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-25T18:55:51.572071Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39844","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-25T18:55:51.581715Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39864","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-25T18:55:51.591736Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39898","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-25T18:55:51.603188Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39932","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-25T18:55:51.609074Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39964","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-25T18:55:51.626758Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40018","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-25T18:55:51.639260Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40072","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-25T18:55:51.648231Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40114","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-25T18:55:51.726855Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40248","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 18:56:26 up 38 min,  0 user,  load average: 3.32, 1.66, 1.28
	Linux pause-720311 6.8.0-1045-gcp #48~22.04.1-Ubuntu SMP Tue Nov 25 13:07:56 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [a2904aa2cfe95db6e5592e32f3ace1f7425fbdeeb1a508fcff3e613537c98289] <==
	I1225 18:56:03.011550       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1225 18:56:03.012059       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1225 18:56:03.012230       1 main.go:148] setting mtu 1500 for CNI 
	I1225 18:56:03.012250       1 main.go:178] kindnetd IP family: "ipv4"
	I1225 18:56:03.012270       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-12-25T18:56:03Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1225 18:56:03.305027       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1225 18:56:03.305123       1 controller.go:381] "Waiting for informer caches to sync"
	I1225 18:56:03.305141       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1225 18:56:03.305274       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1225 18:56:03.705314       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1225 18:56:03.705345       1 metrics.go:72] Registering metrics
	I1225 18:56:03.705419       1 controller.go:711] "Syncing nftables rules"
	I1225 18:56:13.220006       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1225 18:56:13.220090       1 main.go:301] handling current node
	I1225 18:56:23.225061       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1225 18:56:23.225121       1 main.go:301] handling current node
	
	
	==> kube-apiserver [0ff8d5bb771c5b8d4911f9ce99fbf23ccb7accf7fdf9297efd5cae3ca935e25b] <==
	I1225 18:55:52.412509       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1225 18:55:52.412514       1 cache.go:39] Caches are synced for autoregister controller
	I1225 18:55:52.412691       1 controller.go:667] quota admission added evaluator for: namespaces
	I1225 18:55:52.414417       1 default_servicecidr_controller.go:228] Setting default ServiceCIDR condition Ready to True
	I1225 18:55:52.414739       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1225 18:55:52.421962       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1225 18:55:52.422361       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1225 18:55:52.618389       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1225 18:55:53.312314       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1225 18:55:53.316222       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1225 18:55:53.316238       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1225 18:55:53.752589       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1225 18:55:53.788285       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1225 18:55:53.915649       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1225 18:55:53.924702       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.76.2]
	I1225 18:55:53.925788       1 controller.go:667] quota admission added evaluator for: endpoints
	I1225 18:55:53.929775       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1225 18:55:54.342909       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1225 18:55:55.082790       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1225 18:55:55.096433       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1225 18:55:55.106290       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1225 18:56:00.096339       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1225 18:56:00.401578       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1225 18:56:00.408331       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1225 18:56:00.445553       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	
	
	==> kube-controller-manager [7db894da7184856a5ed52b87a0cf50749a3b9f33a3b9a1ec71162758bee876eb] <==
	I1225 18:55:59.342890       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1225 18:55:59.342915       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1225 18:55:59.342944       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1225 18:55:59.343136       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1225 18:55:59.343173       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1225 18:55:59.343285       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1225 18:55:59.343335       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1225 18:55:59.343359       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="pause-720311"
	I1225 18:55:59.343402       1 node_lifecycle_controller.go:1025] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I1225 18:55:59.343611       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1225 18:55:59.343642       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1225 18:55:59.344556       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1225 18:55:59.345935       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1225 18:55:59.346721       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1225 18:55:59.346746       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1225 18:55:59.346823       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1225 18:55:59.346878       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1225 18:55:59.346889       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1225 18:55:59.346917       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1225 18:55:59.352247       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1225 18:55:59.353425       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1225 18:55:59.357245       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="pause-720311" podCIDRs=["10.244.0.0/24"]
	I1225 18:55:59.361549       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1225 18:55:59.374976       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1225 18:56:14.362722       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [021063243ac013ff261196bce21f685dd9e9bb3617953fe9787a2f11319900cd] <==
	I1225 18:56:00.897804       1 server_linux.go:53] "Using iptables proxy"
	I1225 18:56:00.983596       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1225 18:56:01.084274       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1225 18:56:01.084367       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E1225 18:56:01.084555       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1225 18:56:01.146116       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1225 18:56:01.146190       1 server_linux.go:132] "Using iptables Proxier"
	I1225 18:56:01.162437       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1225 18:56:01.163710       1 server.go:527] "Version info" version="v1.34.3"
	I1225 18:56:01.163831       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1225 18:56:01.169636       1 config.go:200] "Starting service config controller"
	I1225 18:56:01.170381       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1225 18:56:01.170914       1 config.go:309] "Starting node config controller"
	I1225 18:56:01.175048       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1225 18:56:01.175166       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1225 18:56:01.171093       1 config.go:106] "Starting endpoint slice config controller"
	I1225 18:56:01.175193       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1225 18:56:01.171107       1 config.go:403] "Starting serviceCIDR config controller"
	I1225 18:56:01.175211       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1225 18:56:01.270758       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1225 18:56:01.275930       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1225 18:56:01.275938       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [4f901c5416c8ce2d27a266ec4f89feaf3fdc93bea79b0b406be3c90d65d83503] <==
	E1225 18:55:52.402368       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1225 18:55:52.402521       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1225 18:55:52.402655       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1225 18:55:52.402748       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1225 18:55:52.403293       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1225 18:55:52.403479       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1225 18:55:52.403555       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1225 18:55:52.403631       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1225 18:55:52.403686       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1225 18:55:52.403732       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1225 18:55:52.403873       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1225 18:55:52.403971       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1225 18:55:52.404035       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1225 18:55:52.404109       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1225 18:55:53.263626       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1225 18:55:53.275798       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1225 18:55:53.407971       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1225 18:55:53.412467       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E1225 18:55:53.447521       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1225 18:55:53.488633       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1225 18:55:53.488842       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1225 18:55:53.544928       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1225 18:55:53.557052       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1225 18:55:53.584205       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	I1225 18:55:55.498191       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Dec 25 18:55:55 pause-720311 kubelet[1318]: I1225 18:55:55.910803    1318 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
	Dec 25 18:55:55 pause-720311 kubelet[1318]: I1225 18:55:55.997680    1318 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-pause-720311" podStartSLOduration=0.997634342 podStartE2EDuration="997.634342ms" podCreationTimestamp="2025-12-25 18:55:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-25 18:55:55.995365701 +0000 UTC m=+1.172300015" watchObservedRunningTime="2025-12-25 18:55:55.997634342 +0000 UTC m=+1.174568650"
	Dec 25 18:55:56 pause-720311 kubelet[1318]: I1225 18:55:56.050769    1318 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-pause-720311" podStartSLOduration=2.050743785 podStartE2EDuration="2.050743785s" podCreationTimestamp="2025-12-25 18:55:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-25 18:55:56.032691721 +0000 UTC m=+1.209626028" watchObservedRunningTime="2025-12-25 18:55:56.050743785 +0000 UTC m=+1.227678084"
	Dec 25 18:55:56 pause-720311 kubelet[1318]: I1225 18:55:56.064859    1318 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-pause-720311" podStartSLOduration=2.064834508 podStartE2EDuration="2.064834508s" podCreationTimestamp="2025-12-25 18:55:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-25 18:55:56.051393102 +0000 UTC m=+1.228327397" watchObservedRunningTime="2025-12-25 18:55:56.064834508 +0000 UTC m=+1.241768812"
	Dec 25 18:55:56 pause-720311 kubelet[1318]: I1225 18:55:56.081929    1318 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/etcd-pause-720311" podStartSLOduration=1.081890628 podStartE2EDuration="1.081890628s" podCreationTimestamp="2025-12-25 18:55:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-25 18:55:56.066064186 +0000 UTC m=+1.242998490" watchObservedRunningTime="2025-12-25 18:55:56.081890628 +0000 UTC m=+1.258824933"
	Dec 25 18:55:59 pause-720311 kubelet[1318]: I1225 18:55:59.369254    1318 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Dec 25 18:55:59 pause-720311 kubelet[1318]: I1225 18:55:59.370008    1318 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Dec 25 18:56:00 pause-720311 kubelet[1318]: I1225 18:56:00.649655    1318 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/110a90ce-5573-4b28-a6ef-f3eead8b4814-xtables-lock\") pod \"kindnet-s9r7k\" (UID: \"110a90ce-5573-4b28-a6ef-f3eead8b4814\") " pod="kube-system/kindnet-s9r7k"
	Dec 25 18:56:00 pause-720311 kubelet[1318]: I1225 18:56:00.649712    1318 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/110a90ce-5573-4b28-a6ef-f3eead8b4814-lib-modules\") pod \"kindnet-s9r7k\" (UID: \"110a90ce-5573-4b28-a6ef-f3eead8b4814\") " pod="kube-system/kindnet-s9r7k"
	Dec 25 18:56:00 pause-720311 kubelet[1318]: I1225 18:56:00.650175    1318 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/948d733e-0cf1-4a38-a38a-cb6750dabc83-xtables-lock\") pod \"kube-proxy-2r7sc\" (UID: \"948d733e-0cf1-4a38-a38a-cb6750dabc83\") " pod="kube-system/kube-proxy-2r7sc"
	Dec 25 18:56:00 pause-720311 kubelet[1318]: I1225 18:56:00.650219    1318 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g8gxm\" (UniqueName: \"kubernetes.io/projected/948d733e-0cf1-4a38-a38a-cb6750dabc83-kube-api-access-g8gxm\") pod \"kube-proxy-2r7sc\" (UID: \"948d733e-0cf1-4a38-a38a-cb6750dabc83\") " pod="kube-system/kube-proxy-2r7sc"
	Dec 25 18:56:00 pause-720311 kubelet[1318]: I1225 18:56:00.650250    1318 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j4lcv\" (UniqueName: \"kubernetes.io/projected/110a90ce-5573-4b28-a6ef-f3eead8b4814-kube-api-access-j4lcv\") pod \"kindnet-s9r7k\" (UID: \"110a90ce-5573-4b28-a6ef-f3eead8b4814\") " pod="kube-system/kindnet-s9r7k"
	Dec 25 18:56:00 pause-720311 kubelet[1318]: I1225 18:56:00.650271    1318 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/948d733e-0cf1-4a38-a38a-cb6750dabc83-kube-proxy\") pod \"kube-proxy-2r7sc\" (UID: \"948d733e-0cf1-4a38-a38a-cb6750dabc83\") " pod="kube-system/kube-proxy-2r7sc"
	Dec 25 18:56:00 pause-720311 kubelet[1318]: I1225 18:56:00.650301    1318 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/110a90ce-5573-4b28-a6ef-f3eead8b4814-cni-cfg\") pod \"kindnet-s9r7k\" (UID: \"110a90ce-5573-4b28-a6ef-f3eead8b4814\") " pod="kube-system/kindnet-s9r7k"
	Dec 25 18:56:00 pause-720311 kubelet[1318]: I1225 18:56:00.650333    1318 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/948d733e-0cf1-4a38-a38a-cb6750dabc83-lib-modules\") pod \"kube-proxy-2r7sc\" (UID: \"948d733e-0cf1-4a38-a38a-cb6750dabc83\") " pod="kube-system/kube-proxy-2r7sc"
	Dec 25 18:56:01 pause-720311 kubelet[1318]: I1225 18:56:01.004060    1318 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-2r7sc" podStartSLOduration=1.004035913 podStartE2EDuration="1.004035913s" podCreationTimestamp="2025-12-25 18:56:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-25 18:56:01.002170894 +0000 UTC m=+6.179105198" watchObservedRunningTime="2025-12-25 18:56:01.004035913 +0000 UTC m=+6.180970217"
	Dec 25 18:56:03 pause-720311 kubelet[1318]: I1225 18:56:03.362867    1318 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-s9r7k" podStartSLOduration=1.469956137 podStartE2EDuration="3.362844082s" podCreationTimestamp="2025-12-25 18:56:00 +0000 UTC" firstStartedPulling="2025-12-25 18:56:00.780416251 +0000 UTC m=+5.957350539" lastFinishedPulling="2025-12-25 18:56:02.673304183 +0000 UTC m=+7.850238484" observedRunningTime="2025-12-25 18:56:03.026306171 +0000 UTC m=+8.203240475" watchObservedRunningTime="2025-12-25 18:56:03.362844082 +0000 UTC m=+8.539778385"
	Dec 25 18:56:13 pause-720311 kubelet[1318]: I1225 18:56:13.503377    1318 kubelet_node_status.go:439] "Fast updating node status as it just became ready"
	Dec 25 18:56:13 pause-720311 kubelet[1318]: I1225 18:56:13.552022    1318 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v8v6m\" (UniqueName: \"kubernetes.io/projected/3d326b5f-ad06-4352-8d63-5a95a4791894-kube-api-access-v8v6m\") pod \"coredns-66bc5c9577-mcpjn\" (UID: \"3d326b5f-ad06-4352-8d63-5a95a4791894\") " pod="kube-system/coredns-66bc5c9577-mcpjn"
	Dec 25 18:56:13 pause-720311 kubelet[1318]: I1225 18:56:13.552071    1318 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3d326b5f-ad06-4352-8d63-5a95a4791894-config-volume\") pod \"coredns-66bc5c9577-mcpjn\" (UID: \"3d326b5f-ad06-4352-8d63-5a95a4791894\") " pod="kube-system/coredns-66bc5c9577-mcpjn"
	Dec 25 18:56:14 pause-720311 kubelet[1318]: I1225 18:56:14.042515    1318 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-mcpjn" podStartSLOduration=14.042492792000001 podStartE2EDuration="14.042492792s" podCreationTimestamp="2025-12-25 18:56:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-25 18:56:14.042474016 +0000 UTC m=+19.219408320" watchObservedRunningTime="2025-12-25 18:56:14.042492792 +0000 UTC m=+19.219427097"
	Dec 25 18:56:22 pause-720311 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Dec 25 18:56:22 pause-720311 systemd[1]: kubelet.service: Deactivated successfully.
	Dec 25 18:56:22 pause-720311 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 25 18:56:22 pause-720311 systemd[1]: kubelet.service: Consumed 1.164s CPU time.
	

                                                
                                                
-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-720311 -n pause-720311
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-720311 -n pause-720311: exit status 2 (357.723796ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:270: (dbg) Run:  kubectl --context pause-720311 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:294: <<< TestPause/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:295: ---------------------/post-mortem---------------------------------
--- FAIL: TestPause/serial/Pause (5.71s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (2.27s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-163446 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-163446 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 11 (279.957893ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-25T19:01:33Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-163446 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 11
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context old-k8s-version-163446 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:213: (dbg) Non-zero exit: kubectl --context old-k8s-version-163446 describe deploy/metrics-server -n kube-system: exit status 1 (70.503357ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): deployments.apps "metrics-server" not found

                                                
                                                
** /stderr **
start_stop_delete_test.go:215: failed to get info on auto-pause deployments. args "kubectl --context old-k8s-version-163446 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:219: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect old-k8s-version-163446
helpers_test.go:244: (dbg) docker inspect old-k8s-version-163446:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "37396ae2407e2231768404ec79c8765ad89338beefc37987d4c4bd842f074e05",
	        "Created": "2025-12-25T19:00:38.731521693Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 258527,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-25T19:00:38.772579228Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:c444b5e11df9d6bc496256b0e11f4e11deb33a885211ded2c3a4667df55bcb6b",
	        "ResolvConfPath": "/var/lib/docker/containers/37396ae2407e2231768404ec79c8765ad89338beefc37987d4c4bd842f074e05/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/37396ae2407e2231768404ec79c8765ad89338beefc37987d4c4bd842f074e05/hostname",
	        "HostsPath": "/var/lib/docker/containers/37396ae2407e2231768404ec79c8765ad89338beefc37987d4c4bd842f074e05/hosts",
	        "LogPath": "/var/lib/docker/containers/37396ae2407e2231768404ec79c8765ad89338beefc37987d4c4bd842f074e05/37396ae2407e2231768404ec79c8765ad89338beefc37987d4c4bd842f074e05-json.log",
	        "Name": "/old-k8s-version-163446",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-163446:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "old-k8s-version-163446",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "37396ae2407e2231768404ec79c8765ad89338beefc37987d4c4bd842f074e05",
	                "LowerDir": "/var/lib/docker/overlay2/da66b1259c79665422104588e6a075c075b8c19dd9bb347e3c8d2431d2f57222-init/diff:/var/lib/docker/overlay2/8152586e7e91edad0090b5c322534edd1346ae6dc28cbca1827aa4c23f366758/diff",
	                "MergedDir": "/var/lib/docker/overlay2/da66b1259c79665422104588e6a075c075b8c19dd9bb347e3c8d2431d2f57222/merged",
	                "UpperDir": "/var/lib/docker/overlay2/da66b1259c79665422104588e6a075c075b8c19dd9bb347e3c8d2431d2f57222/diff",
	                "WorkDir": "/var/lib/docker/overlay2/da66b1259c79665422104588e6a075c075b8c19dd9bb347e3c8d2431d2f57222/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-163446",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-163446/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-163446",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-163446",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-163446",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "3c3184d9bf38e3bd29a15e3a96216ddd99cf820ca4430ddd9626bc363a586d62",
	            "SandboxKey": "/var/run/docker/netns/3c3184d9bf38",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33053"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33054"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33057"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33055"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33056"
	                    }
	                ]
	            },
	            "Networks": {
	                "old-k8s-version-163446": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.103.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "c6b6e067d0596f86d64c9b68f4f95f2e3f9026a738d9a6486ac091374c416820",
	                    "EndpointID": "382d186676f55aabe4b49718b0378f7cbb12f2a9a1facc68c5b54b795b4fa9a9",
	                    "Gateway": "192.168.103.1",
	                    "IPAddress": "192.168.103.2",
	                    "MacAddress": "56:17:4e:3c:8f:d2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "old-k8s-version-163446",
	                        "37396ae2407e"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-163446 -n old-k8s-version-163446
helpers_test.go:253: <<< TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-163446 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-amd64 -p old-k8s-version-163446 logs -n 25: (1.068334493s)
helpers_test.go:261: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │          PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ force-systemd-flag-000275 ssh cat /etc/crio/crio.conf.d/02-crio.conf                                                                                                                                                                          │ force-systemd-flag-000275 │ jenkins │ v1.37.0 │ 25 Dec 25 18:57 UTC │ 25 Dec 25 18:57 UTC │
	│ delete  │ -p force-systemd-flag-000275                                                                                                                                                                                                                  │ force-systemd-flag-000275 │ jenkins │ v1.37.0 │ 25 Dec 25 18:57 UTC │ 25 Dec 25 18:57 UTC │
	│ start   │ -p cert-expiration-002470 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio                                                                                                                                        │ cert-expiration-002470    │ jenkins │ v1.37.0 │ 25 Dec 25 18:57 UTC │ 25 Dec 25 18:57 UTC │
	│ delete  │ -p missing-upgrade-122711                                                                                                                                                                                                                     │ missing-upgrade-122711    │ jenkins │ v1.37.0 │ 25 Dec 25 18:57 UTC │ 25 Dec 25 18:58 UTC │
	│ start   │ -p cert-options-026286 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio                     │ cert-options-026286       │ jenkins │ v1.37.0 │ 25 Dec 25 18:58 UTC │ 25 Dec 25 18:58 UTC │
	│ ssh     │ cert-options-026286 ssh openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt                                                                                                                                                   │ cert-options-026286       │ jenkins │ v1.37.0 │ 25 Dec 25 18:58 UTC │ 25 Dec 25 18:58 UTC │
	│ ssh     │ -p cert-options-026286 -- sudo cat /etc/kubernetes/admin.conf                                                                                                                                                                                 │ cert-options-026286       │ jenkins │ v1.37.0 │ 25 Dec 25 18:58 UTC │ 25 Dec 25 18:58 UTC │
	│ delete  │ -p cert-options-026286                                                                                                                                                                                                                        │ cert-options-026286       │ jenkins │ v1.37.0 │ 25 Dec 25 18:58 UTC │ 25 Dec 25 18:58 UTC │
	│ start   │ -p test-preload-632730 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio                                                                                                                  │ test-preload-632730       │ jenkins │ v1.37.0 │ 25 Dec 25 18:58 UTC │ 25 Dec 25 18:59 UTC │
	│ image   │ test-preload-632730 image pull ghcr.io/medyagh/image-mirrors/busybox:latest                                                                                                                                                                   │ test-preload-632730       │ jenkins │ v1.37.0 │ 25 Dec 25 18:59 UTC │ 25 Dec 25 18:59 UTC │
	│ stop    │ -p test-preload-632730                                                                                                                                                                                                                        │ test-preload-632730       │ jenkins │ v1.37.0 │ 25 Dec 25 18:59 UTC │ 25 Dec 25 18:59 UTC │
	│ start   │ -p test-preload-632730 --preload=true --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=crio                                                                                                                            │ test-preload-632730       │ jenkins │ v1.37.0 │ 25 Dec 25 18:59 UTC │ 25 Dec 25 19:00 UTC │
	│ image   │ test-preload-632730 image list                                                                                                                                                                                                                │ test-preload-632730       │ jenkins │ v1.37.0 │ 25 Dec 25 19:00 UTC │ 25 Dec 25 19:00 UTC │
	│ delete  │ -p test-preload-632730                                                                                                                                                                                                                        │ test-preload-632730       │ jenkins │ v1.37.0 │ 25 Dec 25 19:00 UTC │ 25 Dec 25 19:00 UTC │
	│ start   │ -p kubernetes-upgrade-498224 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                                                                                      │ kubernetes-upgrade-498224 │ jenkins │ v1.37.0 │ 25 Dec 25 19:00 UTC │ 25 Dec 25 19:00 UTC │
	│ delete  │ -p stopped-upgrade-746190                                                                                                                                                                                                                     │ stopped-upgrade-746190    │ jenkins │ v1.37.0 │ 25 Dec 25 19:00 UTC │ 25 Dec 25 19:00 UTC │
	│ start   │ -p old-k8s-version-163446 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-163446    │ jenkins │ v1.37.0 │ 25 Dec 25 19:00 UTC │ 25 Dec 25 19:01 UTC │
	│ stop    │ -p kubernetes-upgrade-498224 --alsologtostderr                                                                                                                                                                                                │ kubernetes-upgrade-498224 │ jenkins │ v1.37.0 │ 25 Dec 25 19:00 UTC │ 25 Dec 25 19:00 UTC │
	│ start   │ -p kubernetes-upgrade-498224 --memory=3072 --kubernetes-version=v1.35.0-rc.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                                                                                 │ kubernetes-upgrade-498224 │ jenkins │ v1.37.0 │ 25 Dec 25 19:00 UTC │                     │
	│ start   │ -p cert-expiration-002470 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio                                                                                                                                     │ cert-expiration-002470    │ jenkins │ v1.37.0 │ 25 Dec 25 19:00 UTC │ 25 Dec 25 19:01 UTC │
	│ delete  │ -p cert-expiration-002470                                                                                                                                                                                                                     │ cert-expiration-002470    │ jenkins │ v1.37.0 │ 25 Dec 25 19:01 UTC │ 25 Dec 25 19:01 UTC │
	│ start   │ -p no-preload-148352 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-rc.1                                                                                  │ no-preload-148352         │ jenkins │ v1.37.0 │ 25 Dec 25 19:01 UTC │                     │
	│ delete  │ -p running-upgrade-861192                                                                                                                                                                                                                     │ running-upgrade-861192    │ jenkins │ v1.37.0 │ 25 Dec 25 19:01 UTC │ 25 Dec 25 19:01 UTC │
	│ start   │ -p embed-certs-684693 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.3                                                                                        │ embed-certs-684693        │ jenkins │ v1.37.0 │ 25 Dec 25 19:01 UTC │                     │
	│ addons  │ enable metrics-server -p old-k8s-version-163446 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-163446    │ jenkins │ v1.37.0 │ 25 Dec 25 19:01 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/25 19:01:24
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1225 19:01:24.498578  270844 out.go:360] Setting OutFile to fd 1 ...
	I1225 19:01:24.498878  270844 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1225 19:01:24.498911  270844 out.go:374] Setting ErrFile to fd 2...
	I1225 19:01:24.498919  270844 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1225 19:01:24.499201  270844 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22301-5579/.minikube/bin
	I1225 19:01:24.499832  270844 out.go:368] Setting JSON to false
	I1225 19:01:24.501555  270844 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":2632,"bootTime":1766686652,"procs":289,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1225 19:01:24.501638  270844 start.go:143] virtualization: kvm guest
	I1225 19:01:24.505411  270844 out.go:179] * [embed-certs-684693] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1225 19:01:24.506785  270844 out.go:179]   - MINIKUBE_LOCATION=22301
	I1225 19:01:24.506801  270844 notify.go:221] Checking for updates...
	I1225 19:01:24.509427  270844 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1225 19:01:24.510804  270844 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22301-5579/kubeconfig
	I1225 19:01:24.512171  270844 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22301-5579/.minikube
	I1225 19:01:24.516253  270844 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1225 19:01:24.517961  270844 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1225 19:01:24.519871  270844 config.go:182] Loaded profile config "kubernetes-upgrade-498224": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-rc.1
	I1225 19:01:24.520023  270844 config.go:182] Loaded profile config "no-preload-148352": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-rc.1
	I1225 19:01:24.520148  270844 config.go:182] Loaded profile config "old-k8s-version-163446": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1225 19:01:24.520311  270844 driver.go:422] Setting default libvirt URI to qemu:///system
	I1225 19:01:24.559981  270844 docker.go:124] docker version: linux-29.1.3:Docker Engine - Community
	I1225 19:01:24.560091  270844 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1225 19:01:24.640132  270844 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:79 OomKillDisable:false NGoroutines:89 SystemTime:2025-12-25 19:01:24.626503682 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:dea7da592f5d1d2b7755e3a161be07f43fad8f75 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1225 19:01:24.640288  270844 docker.go:319] overlay module found
	I1225 19:01:24.643030  270844 out.go:179] * Using the docker driver based on user configuration
	I1225 19:01:24.644440  270844 start.go:309] selected driver: docker
	I1225 19:01:24.644456  270844 start.go:928] validating driver "docker" against <nil>
	I1225 19:01:24.644470  270844 start.go:939] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1225 19:01:24.645243  270844 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1225 19:01:24.708471  270844 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:79 OomKillDisable:false NGoroutines:89 SystemTime:2025-12-25 19:01:24.697541273 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:dea7da592f5d1d2b7755e3a161be07f43fad8f75 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1225 19:01:24.708729  270844 start_flags.go:333] no existing cluster config was found, will generate one from the flags 
	I1225 19:01:24.709033  270844 start_flags.go:1019] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1225 19:01:24.710963  270844 out.go:179] * Using Docker driver with root privileges
	I1225 19:01:24.712331  270844 cni.go:84] Creating CNI manager for ""
	I1225 19:01:24.712392  270844 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1225 19:01:24.712402  270844 start_flags.go:342] Found "CNI" CNI - setting NetworkPlugin=cni
	I1225 19:01:24.712478  270844 start.go:353] cluster config:
	{Name:embed-certs-684693 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:embed-certs-684693 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1225 19:01:24.714003  270844 out.go:179] * Starting "embed-certs-684693" primary control-plane node in "embed-certs-684693" cluster
	I1225 19:01:24.715280  270844 cache.go:134] Beginning downloading kic base image for docker with crio
	I1225 19:01:24.716407  270844 out.go:179] * Pulling base image v0.0.48-1766570851-22316 ...
	I1225 19:01:24.717525  270844 preload.go:188] Checking if preload exists for k8s version v1.34.3 and runtime crio
	I1225 19:01:24.717568  270844 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22301-5579/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.3-cri-o-overlay-amd64.tar.lz4
	I1225 19:01:24.717583  270844 cache.go:65] Caching tarball of preloaded images
	I1225 19:01:24.717607  270844 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a in local docker daemon
	I1225 19:01:24.717673  270844 preload.go:251] Found /home/jenkins/minikube-integration/22301-5579/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1225 19:01:24.717690  270844 cache.go:68] Finished verifying existence of preloaded tar for v1.34.3 on crio
	I1225 19:01:24.717845  270844 profile.go:143] Saving config to /home/jenkins/minikube-integration/22301-5579/.minikube/profiles/embed-certs-684693/config.json ...
	I1225 19:01:24.717877  270844 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22301-5579/.minikube/profiles/embed-certs-684693/config.json: {Name:mk3a16ba31e84703464fd4ddefa0f3b57647e42d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1225 19:01:24.741351  270844 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a in local docker daemon, skipping pull
	I1225 19:01:24.741372  270844 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a exists in daemon, skipping load
	I1225 19:01:24.741392  270844 cache.go:243] Successfully downloaded all kic artifacts
	I1225 19:01:24.741429  270844 start.go:360] acquireMachinesLock for embed-certs-684693: {Name:mkcef018e2fd6119543ae4deda4e408dabf7b389 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1225 19:01:24.741535  270844 start.go:364] duration metric: took 86.006µs to acquireMachinesLock for "embed-certs-684693"
	I1225 19:01:24.741563  270844 start.go:93] Provisioning new machine with config: &{Name:embed-certs-684693 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:embed-certs-684693 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false} &{Name: IP: Port:8443 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1225 19:01:24.741643  270844 start.go:125] createHost starting for "" (driver="docker")
	I1225 19:01:20.430733  265912 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.35.0-rc.1: (1.558665319s)
	I1225 19:01:20.430766  265912 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/22301-5579/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.35.0-rc.1 from cache
	I1225 19:01:20.430791  265912 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: (1.53115679s)
	I1225 19:01:20.430809  265912 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.35.0-rc.1
	I1225 19:01:20.430822  265912 ssh_runner.go:352] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
	I1225 19:01:20.430848  265912 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22301-5579/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (9060352 bytes)
	I1225 19:01:20.430864  265912 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.35.0-rc.1
	I1225 19:01:21.782181  265912 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.35.0-rc.1: (1.351295461s)
	I1225 19:01:21.782218  265912 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/22301-5579/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.35.0-rc.1 from cache
	I1225 19:01:21.782242  265912 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.35.0-rc.1
	I1225 19:01:21.782278  265912 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.35.0-rc.1
	I1225 19:01:23.849323  265912 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.35.0-rc.1: (2.067022117s)
	I1225 19:01:23.849353  265912 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/22301-5579/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.35.0-rc.1 from cache
	I1225 19:01:23.849380  265912 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I1225 19:01:23.849434  265912 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I1225 19:01:24.426335  265912 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/22301-5579/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I1225 19:01:24.426375  265912 cache_images.go:125] Successfully loaded all cached images
	I1225 19:01:24.426379  265912 cache_images.go:94] duration metric: took 10.346648414s to LoadCachedImages
	I1225 19:01:24.426390  265912 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.35.0-rc.1 crio true true} ...
	I1225 19:01:24.426514  265912 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-rc.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=no-preload-148352 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-rc.1 ClusterName:no-preload-148352 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1225 19:01:24.426589  265912 ssh_runner.go:195] Run: crio config
	I1225 19:01:24.475338  265912 cni.go:84] Creating CNI manager for ""
	I1225 19:01:24.475358  265912 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1225 19:01:24.475372  265912 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1225 19:01:24.475401  265912 kubeadm.go:197] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.35.0-rc.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-148352 NodeName:no-preload-148352 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1225 19:01:24.475558  265912 kubeadm.go:203] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-148352"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-rc.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1225 19:01:24.475631  265912 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-rc.1
	I1225 19:01:24.484204  265912 binaries.go:54] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.35.0-rc.1: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.35.0-rc.1': No such file or directory
	
	Initiating transfer...
	I1225 19:01:24.484266  265912 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.35.0-rc.1
	I1225 19:01:24.493347  265912 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/amd64/kubectl.sha256
	I1225 19:01:24.493418  265912 download.go:114] Downloading: https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/amd64/kubeadm.sha256 -> /home/jenkins/minikube-integration/22301-5579/.minikube/cache/linux/amd64/v1.35.0-rc.1/kubeadm
	I1225 19:01:24.493447  265912 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl
	I1225 19:01:24.493460  265912 download.go:114] Downloading: https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/amd64/kubelet.sha256 -> /home/jenkins/minikube-integration/22301-5579/.minikube/cache/linux/amd64/v1.35.0-rc.1/kubelet
	I1225 19:01:24.497400  265912 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.35.0-rc.1/kubectl': No such file or directory
	I1225 19:01:24.497422  265912 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22301-5579/.minikube/cache/linux/amd64/v1.35.0-rc.1/kubectl --> /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl (58597560 bytes)
	I1225 19:01:25.199903  260034 ssh_runner.go:235] Completed: sudo /usr/local/bin/crictl stop --timeout=10 47a1daa4bde269c4add70fd957319c2bd011e78566a1187d9a76b78fb4005782 b9e20d208a63ece66f4b7d503d5220ba92810f69d6963dee30a8cf70eeec5508 33343dda71f9e4a85025812aa89a625b9ed4aacd8cf40e537a040b7a352c6338 e3d5802d1e3d9285d4b0925e9f5391b90305352a65abbab3d6461e977e07719c: (14.05021762s)
	I1225 19:01:25.199988  260034 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1225 19:01:25.246724  260034 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1225 19:01:25.255700  260034 kubeadm.go:158] found existing configuration files:
	-rw------- 1 root root 5643 Dec 25 19:00 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5656 Dec 25 19:00 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 2039 Dec 25 19:00 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5600 Dec 25 19:00 /etc/kubernetes/scheduler.conf
	
	I1225 19:01:25.255764  260034 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1225 19:01:25.263550  260034 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1225 19:01:25.271429  260034 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1225 19:01:25.279265  260034 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1225 19:01:25.279328  260034 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1225 19:01:25.367229  260034 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1225 19:01:25.378080  260034 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1225 19:01:25.378161  260034 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1225 19:01:25.391939  260034 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1225 19:01:25.404284  260034 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1225 19:01:25.460344  260034 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1225 19:01:26.138455  260034 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1225 19:01:26.352278  260034 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1225 19:01:26.406374  260034 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1225 19:01:26.467140  260034 api_server.go:52] waiting for apiserver process to appear ...
	I1225 19:01:26.467222  260034 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1225 19:01:26.967359  260034 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1225 19:01:26.982470  260034 api_server.go:72] duration metric: took 515.344216ms to wait for apiserver process to appear ...
	I1225 19:01:26.982498  260034 api_server.go:88] waiting for apiserver healthz status ...
	I1225 19:01:26.982519  260034 api_server.go:299] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1225 19:01:26.982929  260034 api_server.go:315] stopped: https://192.168.94.2:8443/healthz: Get "https://192.168.94.2:8443/healthz": dial tcp 192.168.94.2:8443: connect: connection refused
	I1225 19:01:27.482603  260034 api_server.go:299] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1225 19:01:24.743824  270844 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1225 19:01:24.744078  270844 start.go:159] libmachine.API.Create for "embed-certs-684693" (driver="docker")
	I1225 19:01:24.744113  270844 client.go:173] LocalClient.Create starting
	I1225 19:01:24.744175  270844 main.go:144] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22301-5579/.minikube/certs/ca.pem
	I1225 19:01:24.744206  270844 main.go:144] libmachine: Decoding PEM data...
	I1225 19:01:24.744225  270844 main.go:144] libmachine: Parsing certificate...
	I1225 19:01:24.744279  270844 main.go:144] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22301-5579/.minikube/certs/cert.pem
	I1225 19:01:24.744308  270844 main.go:144] libmachine: Decoding PEM data...
	I1225 19:01:24.744331  270844 main.go:144] libmachine: Parsing certificate...
	I1225 19:01:24.744674  270844 cli_runner.go:164] Run: docker network inspect embed-certs-684693 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1225 19:01:24.764201  270844 cli_runner.go:211] docker network inspect embed-certs-684693 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1225 19:01:24.764287  270844 network_create.go:284] running [docker network inspect embed-certs-684693] to gather additional debugging logs...
	I1225 19:01:24.764311  270844 cli_runner.go:164] Run: docker network inspect embed-certs-684693
	W1225 19:01:24.785626  270844 cli_runner.go:211] docker network inspect embed-certs-684693 returned with exit code 1
	I1225 19:01:24.785654  270844 network_create.go:287] error running [docker network inspect embed-certs-684693]: docker network inspect embed-certs-684693: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network embed-certs-684693 not found
	I1225 19:01:24.785670  270844 network_create.go:289] output of [docker network inspect embed-certs-684693]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network embed-certs-684693 not found
	
	** /stderr **
	I1225 19:01:24.785775  270844 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1225 19:01:24.809282  270844 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-ced36c84bfdd IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:4a:63:07:5b:3f:80} reservation:<nil>}
	I1225 19:01:24.810349  270844 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-4f7e79553acc IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:92:4f:4f:8b:03:9b} reservation:<nil>}
	I1225 19:01:24.811423  270844 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-f47bec209e15 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:b2:e9:83:11:22:b7} reservation:<nil>}
	I1225 19:01:24.812462  270844 network.go:206] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001f04480}
	I1225 19:01:24.812487  270844 network_create.go:124] attempt to create docker network embed-certs-684693 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I1225 19:01:24.812545  270844 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=embed-certs-684693 embed-certs-684693
	I1225 19:01:24.872505  270844 network_create.go:108] docker network embed-certs-684693 192.168.76.0/24 created
	I1225 19:01:24.872537  270844 kic.go:121] calculated static IP "192.168.76.2" for the "embed-certs-684693" container
	I1225 19:01:24.872609  270844 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1225 19:01:24.894779  270844 cli_runner.go:164] Run: docker volume create embed-certs-684693 --label name.minikube.sigs.k8s.io=embed-certs-684693 --label created_by.minikube.sigs.k8s.io=true
	I1225 19:01:24.914990  270844 oci.go:103] Successfully created a docker volume embed-certs-684693
	I1225 19:01:24.915075  270844 cli_runner.go:164] Run: docker run --rm --name embed-certs-684693-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=embed-certs-684693 --entrypoint /usr/bin/test -v embed-certs-684693:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a -d /var/lib
	I1225 19:01:26.041678  270844 cli_runner.go:217] Completed: docker run --rm --name embed-certs-684693-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=embed-certs-684693 --entrypoint /usr/bin/test -v embed-certs-684693:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a -d /var/lib: (1.126549863s)
	I1225 19:01:26.041717  270844 oci.go:107] Successfully prepared a docker volume embed-certs-684693
	I1225 19:01:26.041772  270844 preload.go:188] Checking if preload exists for k8s version v1.34.3 and runtime crio
	I1225 19:01:26.041789  270844 kic.go:194] Starting extracting preloaded images to volume ...
	I1225 19:01:26.041871  270844 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22301-5579/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.3-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v embed-certs-684693:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a -I lz4 -xf /preloaded.tar -C /extractDir
	I1225 19:01:25.633023  265912 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1225 19:01:25.649045  265912 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-rc.1/kubelet
	I1225 19:01:25.654538  265912 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.35.0-rc.1/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-rc.1/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.35.0-rc.1/kubelet': No such file or directory
	I1225 19:01:25.654575  265912 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22301-5579/.minikube/cache/linux/amd64/v1.35.0-rc.1/kubelet --> /var/lib/minikube/binaries/v1.35.0-rc.1/kubelet (58110244 bytes)
	I1225 19:01:25.699030  265912 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-rc.1/kubeadm
	I1225 19:01:25.706059  265912 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.35.0-rc.1/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-rc.1/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.35.0-rc.1/kubeadm': No such file or directory
	I1225 19:01:25.706099  265912 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22301-5579/.minikube/cache/linux/amd64/v1.35.0-rc.1/kubeadm --> /var/lib/minikube/binaries/v1.35.0-rc.1/kubeadm (72368312 bytes)
	I1225 19:01:26.018560  265912 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1225 19:01:26.027383  265912 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (372 bytes)
	I1225 19:01:26.041451  265912 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (357 bytes)
	I1225 19:01:26.060423  265912 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2218 bytes)
	I1225 19:01:26.075301  265912 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1225 19:01:26.079914  265912 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1225 19:01:26.102128  265912 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1225 19:01:26.220612  265912 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1225 19:01:26.249252  265912 certs.go:69] Setting up /home/jenkins/minikube-integration/22301-5579/.minikube/profiles/no-preload-148352 for IP: 192.168.85.2
	I1225 19:01:26.249279  265912 certs.go:195] generating shared ca certs ...
	I1225 19:01:26.249300  265912 certs.go:227] acquiring lock for ca certs: {Name:mkc96ab6366f062029d385d20297063671b19bca Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1225 19:01:26.249460  265912 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22301-5579/.minikube/ca.key
	I1225 19:01:26.249527  265912 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22301-5579/.minikube/proxy-client-ca.key
	I1225 19:01:26.249545  265912 certs.go:257] generating profile certs ...
	I1225 19:01:26.249627  265912 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22301-5579/.minikube/profiles/no-preload-148352/client.key
	I1225 19:01:26.249651  265912 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22301-5579/.minikube/profiles/no-preload-148352/client.crt with IP's: []
	I1225 19:01:26.358467  265912 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22301-5579/.minikube/profiles/no-preload-148352/client.crt ...
	I1225 19:01:26.358506  265912 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22301-5579/.minikube/profiles/no-preload-148352/client.crt: {Name:mkd9fdd510e96a6284e54043705ce8631c2a9f74 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1225 19:01:26.358713  265912 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22301-5579/.minikube/profiles/no-preload-148352/client.key ...
	I1225 19:01:26.358732  265912 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22301-5579/.minikube/profiles/no-preload-148352/client.key: {Name:mk8a6dc56f7a4059c312266fea8de42d93dfda69 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1225 19:01:26.358858  265912 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22301-5579/.minikube/profiles/no-preload-148352/apiserver.key.adef9d81
	I1225 19:01:26.358882  265912 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22301-5579/.minikube/profiles/no-preload-148352/apiserver.crt.adef9d81 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.85.2]
	I1225 19:01:26.536936  265912 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22301-5579/.minikube/profiles/no-preload-148352/apiserver.crt.adef9d81 ...
	I1225 19:01:26.536969  265912 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22301-5579/.minikube/profiles/no-preload-148352/apiserver.crt.adef9d81: {Name:mk7e9b0427740b34334131554879e9de8f1dbc0e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1225 19:01:26.537153  265912 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22301-5579/.minikube/profiles/no-preload-148352/apiserver.key.adef9d81 ...
	I1225 19:01:26.537174  265912 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22301-5579/.minikube/profiles/no-preload-148352/apiserver.key.adef9d81: {Name:mkb60bec4ef9728f7a3ccd8f72a91f01055d4b49 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1225 19:01:26.537283  265912 certs.go:382] copying /home/jenkins/minikube-integration/22301-5579/.minikube/profiles/no-preload-148352/apiserver.crt.adef9d81 -> /home/jenkins/minikube-integration/22301-5579/.minikube/profiles/no-preload-148352/apiserver.crt
	I1225 19:01:26.537380  265912 certs.go:386] copying /home/jenkins/minikube-integration/22301-5579/.minikube/profiles/no-preload-148352/apiserver.key.adef9d81 -> /home/jenkins/minikube-integration/22301-5579/.minikube/profiles/no-preload-148352/apiserver.key
	I1225 19:01:26.537465  265912 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22301-5579/.minikube/profiles/no-preload-148352/proxy-client.key
	I1225 19:01:26.537492  265912 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22301-5579/.minikube/profiles/no-preload-148352/proxy-client.crt with IP's: []
	I1225 19:01:26.678472  265912 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22301-5579/.minikube/profiles/no-preload-148352/proxy-client.crt ...
	I1225 19:01:26.678502  265912 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22301-5579/.minikube/profiles/no-preload-148352/proxy-client.crt: {Name:mk74fa36e013f810e9aea5cb6002d9d4328525c8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1225 19:01:26.678689  265912 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22301-5579/.minikube/profiles/no-preload-148352/proxy-client.key ...
	I1225 19:01:26.678708  265912 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22301-5579/.minikube/profiles/no-preload-148352/proxy-client.key: {Name:mk4d7c43b463129538beeee60ea4c59a7410a43a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1225 19:01:26.678973  265912 certs.go:484] found cert: /home/jenkins/minikube-integration/22301-5579/.minikube/certs/9112.pem (1338 bytes)
	W1225 19:01:26.679033  265912 certs.go:480] ignoring /home/jenkins/minikube-integration/22301-5579/.minikube/certs/9112_empty.pem, impossibly tiny 0 bytes
	I1225 19:01:26.679047  265912 certs.go:484] found cert: /home/jenkins/minikube-integration/22301-5579/.minikube/certs/ca-key.pem (1679 bytes)
	I1225 19:01:26.679159  265912 certs.go:484] found cert: /home/jenkins/minikube-integration/22301-5579/.minikube/certs/ca.pem (1078 bytes)
	I1225 19:01:26.679204  265912 certs.go:484] found cert: /home/jenkins/minikube-integration/22301-5579/.minikube/certs/cert.pem (1123 bytes)
	I1225 19:01:26.679244  265912 certs.go:484] found cert: /home/jenkins/minikube-integration/22301-5579/.minikube/certs/key.pem (1679 bytes)
	I1225 19:01:26.679308  265912 certs.go:484] found cert: /home/jenkins/minikube-integration/22301-5579/.minikube/files/etc/ssl/certs/91122.pem (1708 bytes)
	I1225 19:01:26.679951  265912 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22301-5579/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1225 19:01:26.698634  265912 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22301-5579/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1225 19:01:26.716251  265912 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22301-5579/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1225 19:01:26.734204  265912 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22301-5579/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1225 19:01:26.754472  265912 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22301-5579/.minikube/profiles/no-preload-148352/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1225 19:01:26.772518  265912 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22301-5579/.minikube/profiles/no-preload-148352/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1225 19:01:26.790735  265912 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22301-5579/.minikube/profiles/no-preload-148352/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1225 19:01:26.808517  265912 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22301-5579/.minikube/profiles/no-preload-148352/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1225 19:01:26.826455  265912 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22301-5579/.minikube/certs/9112.pem --> /usr/share/ca-certificates/9112.pem (1338 bytes)
	I1225 19:01:26.847330  265912 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22301-5579/.minikube/files/etc/ssl/certs/91122.pem --> /usr/share/ca-certificates/91122.pem (1708 bytes)
	I1225 19:01:26.866810  265912 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22301-5579/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1225 19:01:26.890120  265912 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (722 bytes)
	I1225 19:01:26.905607  265912 ssh_runner.go:195] Run: openssl version
	I1225 19:01:26.912784  265912 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1225 19:01:26.923279  265912 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1225 19:01:26.933378  265912 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1225 19:01:26.938975  265912 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 25 18:28 /usr/share/ca-certificates/minikubeCA.pem
	I1225 19:01:26.939059  265912 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1225 19:01:26.987836  265912 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1225 19:01:26.998788  265912 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
	I1225 19:01:27.006560  265912 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/9112.pem
	I1225 19:01:27.014009  265912 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/9112.pem /etc/ssl/certs/9112.pem
	I1225 19:01:27.021738  265912 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/9112.pem
	I1225 19:01:27.025776  265912 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 25 18:34 /usr/share/ca-certificates/9112.pem
	I1225 19:01:27.025827  265912 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/9112.pem
	I1225 19:01:27.061673  265912 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1225 19:01:27.070813  265912 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/9112.pem /etc/ssl/certs/51391683.0
	I1225 19:01:27.079354  265912 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/91122.pem
	I1225 19:01:27.088178  265912 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/91122.pem /etc/ssl/certs/91122.pem
	I1225 19:01:27.097359  265912 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/91122.pem
	I1225 19:01:27.102237  265912 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 25 18:34 /usr/share/ca-certificates/91122.pem
	I1225 19:01:27.102302  265912 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/91122.pem
	I1225 19:01:27.141507  265912 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1225 19:01:27.149931  265912 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/91122.pem /etc/ssl/certs/3ec20f2e.0
	I1225 19:01:27.158595  265912 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1225 19:01:27.163121  265912 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1225 19:01:27.163185  265912 kubeadm.go:401] StartCluster: {Name:no-preload-148352 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:no-preload-148352 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1225 19:01:27.163287  265912 cri.go:61] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1225 19:01:27.163345  265912 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1225 19:01:27.199843  265912 cri.go:96] found id: ""
	I1225 19:01:27.199942  265912 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1225 19:01:27.210059  265912 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1225 19:01:27.218578  265912 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1225 19:01:27.218646  265912 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1225 19:01:27.227162  265912 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1225 19:01:27.227189  265912 kubeadm.go:158] found existing configuration files:
	
	I1225 19:01:27.227241  265912 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1225 19:01:27.235752  265912 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1225 19:01:27.235814  265912 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1225 19:01:27.244032  265912 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1225 19:01:27.252294  265912 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1225 19:01:27.252357  265912 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1225 19:01:27.260040  265912 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1225 19:01:27.268100  265912 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1225 19:01:27.268168  265912 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1225 19:01:27.276246  265912 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1225 19:01:27.285090  265912 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1225 19:01:27.285173  265912 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1225 19:01:27.293493  265912 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1225 19:01:27.412724  265912 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1045-gcp\n", err: exit status 1
	I1225 19:01:27.486700  265912 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1225 19:01:32.482993  260034 api_server.go:315] stopped: https://192.168.94.2:8443/healthz: Get "https://192.168.94.2:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1225 19:01:32.483041  260034 api_server.go:299] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	
	
	==> CRI-O <==
	Dec 25 19:01:20 old-k8s-version-163446 crio[784]: time="2025-12-25T19:01:20.558872358Z" level=info msg="Started container" PID=2138 containerID=e7eecb004fae5bc8185816120d4e1b06254cd40704e60885c6afb82130a8cb6a description=kube-system/storage-provisioner/storage-provisioner id=6d0b23a1-f23e-4a79-9203-f70d5d83ad86 name=/runtime.v1.RuntimeService/StartContainer sandboxID=d802d7838abe0d3fe688e209885f84a51f0be1bbe164686d4825266c73532119
	Dec 25 19:01:20 old-k8s-version-163446 crio[784]: time="2025-12-25T19:01:20.559522594Z" level=info msg="Started container" PID=2139 containerID=216f2d929f370a944489ae72b3ec44ac830a5daa27086e13233dfcae6d833fb6 description=kube-system/coredns-5dd5756b68-chdzr/coredns id=0ebbddd0-a76c-422e-9846-8537c6a32008 name=/runtime.v1.RuntimeService/StartContainer sandboxID=3210232e79970361f8b9a32b35c83abb2f2618011546952fd6772489ab5d46f4
	Dec 25 19:01:24 old-k8s-version-163446 crio[784]: time="2025-12-25T19:01:24.11960927Z" level=info msg="Running pod sandbox: default/busybox/POD" id=777e4330-cb22-4aeb-9211-6f40b138653e name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 25 19:01:24 old-k8s-version-163446 crio[784]: time="2025-12-25T19:01:24.11970639Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 25 19:01:24 old-k8s-version-163446 crio[784]: time="2025-12-25T19:01:24.125609432Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:e5b51092495ee59e69e7343b05a316ef64b7041aeebb0235be5361dbf6e17e66 UID:d7ba23a7-2bd3-4170-952b-a664e8b82355 NetNS:/var/run/netns/8d3fbad2-3e4d-45e4-ba79-fa045925b600 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc000630290}] Aliases:map[]}"
	Dec 25 19:01:24 old-k8s-version-163446 crio[784]: time="2025-12-25T19:01:24.125638787Z" level=info msg="Adding pod default_busybox to CNI network \"kindnet\" (type=ptp)"
	Dec 25 19:01:24 old-k8s-version-163446 crio[784]: time="2025-12-25T19:01:24.135483212Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:e5b51092495ee59e69e7343b05a316ef64b7041aeebb0235be5361dbf6e17e66 UID:d7ba23a7-2bd3-4170-952b-a664e8b82355 NetNS:/var/run/netns/8d3fbad2-3e4d-45e4-ba79-fa045925b600 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc000630290}] Aliases:map[]}"
	Dec 25 19:01:24 old-k8s-version-163446 crio[784]: time="2025-12-25T19:01:24.135614776Z" level=info msg="Checking pod default_busybox for CNI network kindnet (type=ptp)"
	Dec 25 19:01:24 old-k8s-version-163446 crio[784]: time="2025-12-25T19:01:24.136318149Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Dec 25 19:01:24 old-k8s-version-163446 crio[784]: time="2025-12-25T19:01:24.137119852Z" level=info msg="Ran pod sandbox e5b51092495ee59e69e7343b05a316ef64b7041aeebb0235be5361dbf6e17e66 with infra container: default/busybox/POD" id=777e4330-cb22-4aeb-9211-6f40b138653e name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 25 19:01:24 old-k8s-version-163446 crio[784]: time="2025-12-25T19:01:24.13836248Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=0f2bf2cb-0e08-4b24-9713-7e339124474e name=/runtime.v1.ImageService/ImageStatus
	Dec 25 19:01:24 old-k8s-version-163446 crio[784]: time="2025-12-25T19:01:24.138481973Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=0f2bf2cb-0e08-4b24-9713-7e339124474e name=/runtime.v1.ImageService/ImageStatus
	Dec 25 19:01:24 old-k8s-version-163446 crio[784]: time="2025-12-25T19:01:24.138551132Z" level=info msg="Neither image nor artifact gcr.io/k8s-minikube/busybox:1.28.4-glibc found" id=0f2bf2cb-0e08-4b24-9713-7e339124474e name=/runtime.v1.ImageService/ImageStatus
	Dec 25 19:01:24 old-k8s-version-163446 crio[784]: time="2025-12-25T19:01:24.139113559Z" level=info msg="Pulling image: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=66bb0adb-0064-41f8-9db3-989b93522c2a name=/runtime.v1.ImageService/PullImage
	Dec 25 19:01:24 old-k8s-version-163446 crio[784]: time="2025-12-25T19:01:24.14222399Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Dec 25 19:01:25 old-k8s-version-163446 crio[784]: time="2025-12-25T19:01:25.646432942Z" level=info msg="Pulled image: gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998" id=66bb0adb-0064-41f8-9db3-989b93522c2a name=/runtime.v1.ImageService/PullImage
	Dec 25 19:01:25 old-k8s-version-163446 crio[784]: time="2025-12-25T19:01:25.648187779Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=3c46822d-948a-4357-b00f-e3252c8e7a5f name=/runtime.v1.ImageService/ImageStatus
	Dec 25 19:01:25 old-k8s-version-163446 crio[784]: time="2025-12-25T19:01:25.651118588Z" level=info msg="Creating container: default/busybox/busybox" id=7c3daea8-0b51-49ba-941e-513d9847bdd3 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 25 19:01:25 old-k8s-version-163446 crio[784]: time="2025-12-25T19:01:25.651274109Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 25 19:01:25 old-k8s-version-163446 crio[784]: time="2025-12-25T19:01:25.656670573Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 25 19:01:25 old-k8s-version-163446 crio[784]: time="2025-12-25T19:01:25.657384124Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 25 19:01:25 old-k8s-version-163446 crio[784]: time="2025-12-25T19:01:25.682812815Z" level=info msg="Created container a58d13e3ac82dc23f34fc5ea110dc2c0e9dc004ef080474967df0505290f8441: default/busybox/busybox" id=7c3daea8-0b51-49ba-941e-513d9847bdd3 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 25 19:01:25 old-k8s-version-163446 crio[784]: time="2025-12-25T19:01:25.684005811Z" level=info msg="Starting container: a58d13e3ac82dc23f34fc5ea110dc2c0e9dc004ef080474967df0505290f8441" id=02e66a2e-32e7-4dd8-af35-afa55ed83331 name=/runtime.v1.RuntimeService/StartContainer
	Dec 25 19:01:25 old-k8s-version-163446 crio[784]: time="2025-12-25T19:01:25.686283986Z" level=info msg="Started container" PID=2212 containerID=a58d13e3ac82dc23f34fc5ea110dc2c0e9dc004ef080474967df0505290f8441 description=default/busybox/busybox id=02e66a2e-32e7-4dd8-af35-afa55ed83331 name=/runtime.v1.RuntimeService/StartContainer sandboxID=e5b51092495ee59e69e7343b05a316ef64b7041aeebb0235be5361dbf6e17e66
	Dec 25 19:01:32 old-k8s-version-163446 crio[784]: time="2025-12-25T19:01:32.921630573Z" level=error msg="Unhandled Error: unable to upgrade websocket connection: websocket server finished before becoming ready (logger=\"UnhandledError\")"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                              NAMESPACE
	a58d13e3ac82d       gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998   8 seconds ago       Running             busybox                   0                   e5b51092495ee       busybox                                          default
	216f2d929f370       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc                                      13 seconds ago      Running             coredns                   0                   3210232e79970       coredns-5dd5756b68-chdzr                         kube-system
	e7eecb004fae5       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      13 seconds ago      Running             storage-provisioner       0                   d802d7838abe0       storage-provisioner                              kube-system
	6b0968eb8b6db       docker.io/kindest/kindnetd@sha256:7c22558dc06a570d46ea6e8a73b23cdc754eb81f7c08d3441a3171ad359ffc27    25 seconds ago      Running             kindnet-cni               0                   f8b6ef3f81dcc       kindnet-krjfj                                    kube-system
	72231efea1b70       ea1030da44aa18666a7bf15fddd2a38c3143c3277159cb8bdd95f45c8ce62d7a                                      26 seconds ago      Running             kube-proxy                0                   5618475d5bf97       kube-proxy-mxztf                                 kube-system
	240f10ea60a6d       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9                                      44 seconds ago      Running             etcd                      0                   9f1b2ecabc108       etcd-old-k8s-version-163446                      kube-system
	5fbf653b4250c       bb5e0dde9054c02d6badee88547be7e7bb7b7b818d277c8a61b4b29484bbff95                                      44 seconds ago      Running             kube-apiserver            0                   0c6667bb2cbc3       kube-apiserver-old-k8s-version-163446            kube-system
	f48296d303282       4be79c38a4bab6e1252a35697500e8a0d9c5c7c771d9fcc1935c9a7f6cdf4c62                                      44 seconds ago      Running             kube-controller-manager   0                   a5cc3fd95dba0       kube-controller-manager-old-k8s-version-163446   kube-system
	76eec300352d0       f6f496300a2ae7a6727ccf3080d66d2fd22b6cfc271df5351c976c23a28bb157                                      44 seconds ago      Running             kube-scheduler            0                   5b3345acc122b       kube-scheduler-old-k8s-version-163446            kube-system
	
	
	==> coredns [216f2d929f370a944489ae72b3ec44ac830a5daa27086e13233dfcae6d833fb6] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 25cf5af2951e282c4b0e961a02fb5d3e57c974501832fee92eec17b5135b9ec9d9e87d2ac94e6d117a5ed3dd54e8800aa7b4479706eb54497145ccdb80397d1b
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] 127.0.0.1:33882 - 49870 "HINFO IN 6709490525337484940.909490159886806912. udp 56 false 512" NXDOMAIN qr,rd,ra 131 0.413011892s
	
	
	==> describe nodes <==
	Name:               old-k8s-version-163446
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=old-k8s-version-163446
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=65b0339f3ab6fa9cf527eb915d9288ef7a9c7fef
	                    minikube.k8s.io/name=old-k8s-version-163446
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_25T19_00_55_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 25 Dec 2025 19:00:51 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  old-k8s-version-163446
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 25 Dec 2025 19:01:25 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 25 Dec 2025 19:01:25 +0000   Thu, 25 Dec 2025 19:00:49 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 25 Dec 2025 19:01:25 +0000   Thu, 25 Dec 2025 19:00:49 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 25 Dec 2025 19:01:25 +0000   Thu, 25 Dec 2025 19:00:49 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 25 Dec 2025 19:01:25 +0000   Thu, 25 Dec 2025 19:01:20 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.103.2
	  Hostname:    old-k8s-version-163446
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863360Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863360Ki
	  pods:               110
	System Info:
	  Machine ID:                 6a05ebbd9dbde47388fc365d694bbbfd
	  System UUID:                0cc28420-dcfc-4f7d-abe6-5c56c5c91736
	  Boot ID:                    665c5054-bd76-444c-ba4d-23c4edde1464
	  Kernel Version:             6.8.0-1045-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.3
	  Kubelet Version:            v1.28.0
	  Kube-Proxy Version:         v1.28.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                              ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         11s
	  kube-system                 coredns-5dd5756b68-chdzr                          100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     27s
	  kube-system                 etcd-old-k8s-version-163446                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         40s
	  kube-system                 kindnet-krjfj                                     100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      27s
	  kube-system                 kube-apiserver-old-k8s-version-163446             250m (3%)     0 (0%)      0 (0%)           0 (0%)         40s
	  kube-system                 kube-controller-manager-old-k8s-version-163446    200m (2%)     0 (0%)      0 (0%)           0 (0%)         40s
	  kube-system                 kube-proxy-mxztf                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         27s
	  kube-system                 kube-scheduler-old-k8s-version-163446             100m (1%)     0 (0%)      0 (0%)           0 (0%)         40s
	  kube-system                 storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         27s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 26s   kube-proxy       
	  Normal  Starting                 40s   kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  40s   kubelet          Node old-k8s-version-163446 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    40s   kubelet          Node old-k8s-version-163446 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     40s   kubelet          Node old-k8s-version-163446 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           28s   node-controller  Node old-k8s-version-163446 event: Registered Node old-k8s-version-163446 in Controller
	  Normal  NodeReady                14s   kubelet          Node old-k8s-version-163446 status is now: NodeReady
	
	
	==> dmesg <==
	[Dec25 18:17] MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
	[  +0.001703] TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details.
	[  +0.001000] MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
	[  +0.084012] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
	[  +0.391152] i8042: Warning: Keylock active
	[  +0.010665] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.485479] block sda: the capability attribute has been deprecated.
	[  +0.079658] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.024208] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +4.790329] kauditd_printk_skb: 47 callbacks suppressed
	
	
	==> etcd [240f10ea60a6de0d3d031ab6db67c8c05ef519ddfb24bcca3ccea42ad3c1a2f8] <==
	{"level":"info","ts":"2025-12-25T19:00:49.500916Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.103.2:2380"}
	{"level":"info","ts":"2025-12-25T19:00:50.387157Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f23060b075c4c089 is starting a new election at term 1"}
	{"level":"info","ts":"2025-12-25T19:00:50.387212Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f23060b075c4c089 became pre-candidate at term 1"}
	{"level":"info","ts":"2025-12-25T19:00:50.38723Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f23060b075c4c089 received MsgPreVoteResp from f23060b075c4c089 at term 1"}
	{"level":"info","ts":"2025-12-25T19:00:50.387244Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f23060b075c4c089 became candidate at term 2"}
	{"level":"info","ts":"2025-12-25T19:00:50.38725Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f23060b075c4c089 received MsgVoteResp from f23060b075c4c089 at term 2"}
	{"level":"info","ts":"2025-12-25T19:00:50.387269Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f23060b075c4c089 became leader at term 2"}
	{"level":"info","ts":"2025-12-25T19:00:50.387279Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: f23060b075c4c089 elected leader f23060b075c4c089 at term 2"}
	{"level":"info","ts":"2025-12-25T19:00:50.388267Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"f23060b075c4c089","local-member-attributes":"{Name:old-k8s-version-163446 ClientURLs:[https://192.168.103.2:2379]}","request-path":"/0/members/f23060b075c4c089/attributes","cluster-id":"3336683c081d149d","publish-timeout":"7s"}
	{"level":"info","ts":"2025-12-25T19:00:50.388378Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-12-25T19:00:50.388357Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-12-25T19:00:50.388554Z","caller":"etcdserver/server.go:2571","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2025-12-25T19:00:50.388605Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-12-25T19:00:50.388635Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2025-12-25T19:00:50.389167Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"3336683c081d149d","local-member-id":"f23060b075c4c089","cluster-version":"3.5"}
	{"level":"info","ts":"2025-12-25T19:00:50.389265Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2025-12-25T19:00:50.389874Z","caller":"etcdserver/server.go:2595","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2025-12-25T19:00:50.39069Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.103.2:2379"}
	{"level":"info","ts":"2025-12-25T19:00:50.39127Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2025-12-25T19:01:04.325022Z","caller":"traceutil/trace.go:171","msg":"trace[1496731825] transaction","detail":"{read_only:false; response_revision:269; number_of_response:1; }","duration":"136.655879ms","start":"2025-12-25T19:01:04.188345Z","end":"2025-12-25T19:01:04.325001Z","steps":["trace[1496731825] 'process raft request'  (duration: 136.51308ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-25T19:01:06.813711Z","caller":"traceutil/trace.go:171","msg":"trace[1368220054] transaction","detail":"{read_only:false; response_revision:308; number_of_response:1; }","duration":"115.285721ms","start":"2025-12-25T19:01:06.698405Z","end":"2025-12-25T19:01:06.813691Z","steps":["trace[1368220054] 'process raft request'  (duration: 107.303245ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-25T19:01:06.81371Z","caller":"traceutil/trace.go:171","msg":"trace[1802051864] transaction","detail":"{read_only:false; response_revision:309; number_of_response:1; }","duration":"115.132435ms","start":"2025-12-25T19:01:06.698568Z","end":"2025-12-25T19:01:06.8137Z","steps":["trace[1802051864] 'process raft request'  (duration: 115.06013ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-25T19:01:14.891633Z","caller":"traceutil/trace.go:171","msg":"trace[541044030] transaction","detail":"{read_only:false; response_revision:385; number_of_response:1; }","duration":"100.727124ms","start":"2025-12-25T19:01:14.790885Z","end":"2025-12-25T19:01:14.891612Z","steps":["trace[541044030] 'process raft request'  (duration: 100.556901ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-25T19:01:23.119359Z","caller":"traceutil/trace.go:171","msg":"trace[1373355841] transaction","detail":"{read_only:false; response_revision:414; number_of_response:1; }","duration":"137.329983ms","start":"2025-12-25T19:01:22.982007Z","end":"2025-12-25T19:01:23.119337Z","steps":["trace[1373355841] 'process raft request'  (duration: 120.498027ms)","trace[1373355841] 'compare'  (duration: 16.741503ms)"],"step_count":2}
	{"level":"info","ts":"2025-12-25T19:01:23.811169Z","caller":"traceutil/trace.go:171","msg":"trace[1307372818] transaction","detail":"{read_only:false; response_revision:415; number_of_response:1; }","duration":"106.402184ms","start":"2025-12-25T19:01:23.704744Z","end":"2025-12-25T19:01:23.811146Z","steps":["trace[1307372818] 'process raft request'  (duration: 106.27296ms)"],"step_count":1}
	
	
	==> kernel <==
	 19:01:34 up 44 min,  0 user,  load average: 4.43, 2.65, 1.77
	Linux old-k8s-version-163446 6.8.0-1045-gcp #48~22.04.1-Ubuntu SMP Tue Nov 25 13:07:56 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [6b0968eb8b6db2114c740e17a0f9565acece08be21537cd2fd0d2a9a29c71a37] <==
	I1225 19:01:09.566970       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1225 19:01:09.567285       1 main.go:139] hostIP = 192.168.103.2
	podIP = 192.168.103.2
	I1225 19:01:09.567440       1 main.go:148] setting mtu 1500 for CNI 
	I1225 19:01:09.567460       1 main.go:178] kindnetd IP family: "ipv4"
	I1225 19:01:09.567479       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-12-25T19:01:09Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1225 19:01:09.768816       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1225 19:01:09.769279       1 controller.go:381] "Waiting for informer caches to sync"
	I1225 19:01:09.769550       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1225 19:01:09.769800       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1225 19:01:10.369789       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1225 19:01:10.369824       1 metrics.go:72] Registering metrics
	I1225 19:01:10.369888       1 controller.go:711] "Syncing nftables rules"
	I1225 19:01:19.770762       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1225 19:01:19.770882       1 main.go:301] handling current node
	I1225 19:01:29.772184       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1225 19:01:29.772240       1 main.go:301] handling current node
	
	
	==> kube-apiserver [5fbf653b4250cd9b4fd461e493100d13f3a3ed75ee0900519345f50c8256b97d] <==
	I1225 19:00:51.535402       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I1225 19:00:51.535428       1 aggregator.go:166] initial CRD sync complete...
	I1225 19:00:51.535435       1 autoregister_controller.go:141] Starting autoregister controller
	I1225 19:00:51.535441       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1225 19:00:51.535448       1 cache.go:39] Caches are synced for autoregister controller
	I1225 19:00:51.535466       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1225 19:00:51.535486       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I1225 19:00:51.536422       1 controller.go:624] quota admission added evaluator for: namespaces
	E1225 19:00:51.547335       1 controller.go:146] "Failed to ensure lease exists, will retry" err="namespaces \"kube-system\" not found" interval="200ms"
	I1225 19:00:51.750163       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I1225 19:00:52.439562       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1225 19:00:52.443028       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1225 19:00:52.443050       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1225 19:00:52.838681       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1225 19:00:52.874122       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1225 19:00:52.944489       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1225 19:00:52.950223       1 lease.go:263] Resetting endpoints for master service "kubernetes" to [192.168.103.2]
	I1225 19:00:52.951325       1 controller.go:624] quota admission added evaluator for: endpoints
	I1225 19:00:52.956472       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1225 19:00:53.484623       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I1225 19:00:54.693781       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I1225 19:00:54.705043       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1225 19:00:54.715710       1 controller.go:624] quota admission added evaluator for: daemonsets.apps
	I1225 19:01:07.108068       1 controller.go:624] quota admission added evaluator for: controllerrevisions.apps
	I1225 19:01:07.160406       1 controller.go:624] quota admission added evaluator for: replicasets.apps
	
	
	==> kube-controller-manager [f48296d3032825aee0b60d129ce1e0fff34a7378f9951924271b54f690d16c76] <==
	I1225 19:01:06.546732       1 event.go:307] "Event occurred" object="kube-system/kube-controller-manager-old-k8s-version-163446" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I1225 19:01:06.546957       1 event.go:307] "Event occurred" object="kube-system/kube-apiserver-old-k8s-version-163446" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I1225 19:01:06.549705       1 shared_informer.go:318] Caches are synced for resource quota
	I1225 19:01:06.872587       1 shared_informer.go:318] Caches are synced for garbage collector
	I1225 19:01:06.882984       1 shared_informer.go:318] Caches are synced for garbage collector
	I1225 19:01:06.883017       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	I1225 19:01:07.121131       1 event.go:307] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-mxztf"
	I1225 19:01:07.130629       1 event.go:307] "Event occurred" object="kube-system/kindnet" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-krjfj"
	I1225 19:01:07.175447       1 event.go:307] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-5dd5756b68 to 2"
	I1225 19:01:07.361542       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-5dd5756b68-skcnv"
	I1225 19:01:07.388338       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-5dd5756b68-chdzr"
	I1225 19:01:07.401048       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="219.198728ms"
	I1225 19:01:07.427440       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="26.298976ms"
	I1225 19:01:07.428271       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="306.502µs"
	I1225 19:01:07.555604       1 event.go:307] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set coredns-5dd5756b68 to 1 from 2"
	I1225 19:01:07.583226       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: coredns-5dd5756b68-skcnv"
	I1225 19:01:07.596588       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="41.181632ms"
	I1225 19:01:07.619194       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="21.742126ms"
	I1225 19:01:07.636244       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="16.978279ms"
	I1225 19:01:07.636369       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="70.199µs"
	I1225 19:01:20.187196       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="110.373µs"
	I1225 19:01:20.209411       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="88.367µs"
	I1225 19:01:20.884813       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="6.807979ms"
	I1225 19:01:20.885034       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="67.022µs"
	I1225 19:01:21.526925       1 node_lifecycle_controller.go:1048] "Controller detected that some Nodes are Ready. Exiting master disruption mode"
	
	
	==> kube-proxy [72231efea1b70a6f63eed26c3391514fd6609098396f08ba05af917824f9561e] <==
	I1225 19:01:07.610574       1 server_others.go:69] "Using iptables proxy"
	I1225 19:01:07.627857       1 node.go:141] Successfully retrieved node IP: 192.168.103.2
	I1225 19:01:07.657336       1 server.go:632] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1225 19:01:07.660793       1 server_others.go:152] "Using iptables Proxier"
	I1225 19:01:07.661059       1 server_others.go:421] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I1225 19:01:07.661075       1 server_others.go:438] "Defaulting to no-op detect-local"
	I1225 19:01:07.661149       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1225 19:01:07.662002       1 server.go:846] "Version info" version="v1.28.0"
	I1225 19:01:07.662170       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1225 19:01:07.664131       1 config.go:97] "Starting endpoint slice config controller"
	I1225 19:01:07.664237       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1225 19:01:07.664329       1 config.go:188] "Starting service config controller"
	I1225 19:01:07.670499       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1225 19:01:07.670127       1 config.go:315] "Starting node config controller"
	I1225 19:01:07.670793       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1225 19:01:07.765389       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I1225 19:01:07.771134       1 shared_informer.go:318] Caches are synced for node config
	I1225 19:01:07.771251       1 shared_informer.go:318] Caches are synced for service config
	
	
	==> kube-scheduler [76eec300352d00b504cd3a9aecbfe8862440dc95b5efa2a0b6913d8e3052e7f9] <==
	W1225 19:00:51.499950       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1225 19:00:51.499967       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W1225 19:00:51.500809       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1225 19:00:51.500837       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W1225 19:00:51.500808       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1225 19:00:51.500856       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W1225 19:00:51.501391       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1225 19:00:51.501418       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W1225 19:00:52.316103       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E1225 19:00:52.316146       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W1225 19:00:52.335948       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1225 19:00:52.335983       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W1225 19:00:52.424015       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1225 19:00:52.424049       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W1225 19:00:52.470481       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1225 19:00:52.470521       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W1225 19:00:52.566631       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1225 19:00:52.566676       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W1225 19:00:52.620462       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1225 19:00:52.620505       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W1225 19:00:52.708213       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E1225 19:00:52.708250       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W1225 19:00:52.715698       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1225 19:00:52.715738       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I1225 19:00:55.594347       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Dec 25 19:01:06 old-k8s-version-163446 kubelet[1405]: I1225 19:01:06.583859    1405 kuberuntime_manager.go:1463] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Dec 25 19:01:06 old-k8s-version-163446 kubelet[1405]: I1225 19:01:06.584687    1405 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Dec 25 19:01:07 old-k8s-version-163446 kubelet[1405]: I1225 19:01:07.134545    1405 topology_manager.go:215] "Topology Admit Handler" podUID="ac805838-ff33-483a-8b56-db2598a7c377" podNamespace="kube-system" podName="kube-proxy-mxztf"
	Dec 25 19:01:07 old-k8s-version-163446 kubelet[1405]: I1225 19:01:07.144476    1405 topology_manager.go:215] "Topology Admit Handler" podUID="d8ae6ebb-54be-4b65-93b2-6fca9646477f" podNamespace="kube-system" podName="kindnet-krjfj"
	Dec 25 19:01:07 old-k8s-version-163446 kubelet[1405]: I1225 19:01:07.147387    1405 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/ac805838-ff33-483a-8b56-db2598a7c377-kube-proxy\") pod \"kube-proxy-mxztf\" (UID: \"ac805838-ff33-483a-8b56-db2598a7c377\") " pod="kube-system/kube-proxy-mxztf"
	Dec 25 19:01:07 old-k8s-version-163446 kubelet[1405]: I1225 19:01:07.147451    1405 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jh2zb\" (UniqueName: \"kubernetes.io/projected/ac805838-ff33-483a-8b56-db2598a7c377-kube-api-access-jh2zb\") pod \"kube-proxy-mxztf\" (UID: \"ac805838-ff33-483a-8b56-db2598a7c377\") " pod="kube-system/kube-proxy-mxztf"
	Dec 25 19:01:07 old-k8s-version-163446 kubelet[1405]: I1225 19:01:07.147485    1405 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ac805838-ff33-483a-8b56-db2598a7c377-xtables-lock\") pod \"kube-proxy-mxztf\" (UID: \"ac805838-ff33-483a-8b56-db2598a7c377\") " pod="kube-system/kube-proxy-mxztf"
	Dec 25 19:01:07 old-k8s-version-163446 kubelet[1405]: I1225 19:01:07.147515    1405 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ac805838-ff33-483a-8b56-db2598a7c377-lib-modules\") pod \"kube-proxy-mxztf\" (UID: \"ac805838-ff33-483a-8b56-db2598a7c377\") " pod="kube-system/kube-proxy-mxztf"
	Dec 25 19:01:07 old-k8s-version-163446 kubelet[1405]: I1225 19:01:07.248457    1405 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/d8ae6ebb-54be-4b65-93b2-6fca9646477f-xtables-lock\") pod \"kindnet-krjfj\" (UID: \"d8ae6ebb-54be-4b65-93b2-6fca9646477f\") " pod="kube-system/kindnet-krjfj"
	Dec 25 19:01:07 old-k8s-version-163446 kubelet[1405]: I1225 19:01:07.248684    1405 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d8ae6ebb-54be-4b65-93b2-6fca9646477f-lib-modules\") pod \"kindnet-krjfj\" (UID: \"d8ae6ebb-54be-4b65-93b2-6fca9646477f\") " pod="kube-system/kindnet-krjfj"
	Dec 25 19:01:07 old-k8s-version-163446 kubelet[1405]: I1225 19:01:07.248878    1405 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/d8ae6ebb-54be-4b65-93b2-6fca9646477f-cni-cfg\") pod \"kindnet-krjfj\" (UID: \"d8ae6ebb-54be-4b65-93b2-6fca9646477f\") " pod="kube-system/kindnet-krjfj"
	Dec 25 19:01:07 old-k8s-version-163446 kubelet[1405]: I1225 19:01:07.249034    1405 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nzc8r\" (UniqueName: \"kubernetes.io/projected/d8ae6ebb-54be-4b65-93b2-6fca9646477f-kube-api-access-nzc8r\") pod \"kindnet-krjfj\" (UID: \"d8ae6ebb-54be-4b65-93b2-6fca9646477f\") " pod="kube-system/kindnet-krjfj"
	Dec 25 19:01:08 old-k8s-version-163446 kubelet[1405]: I1225 19:01:08.658232    1405 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-mxztf" podStartSLOduration=1.6581819439999999 podCreationTimestamp="2025-12-25 19:01:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-25 19:01:07.872571153 +0000 UTC m=+13.204721495" watchObservedRunningTime="2025-12-25 19:01:08.658181944 +0000 UTC m=+13.990332325"
	Dec 25 19:01:09 old-k8s-version-163446 kubelet[1405]: I1225 19:01:09.842406    1405 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kindnet-krjfj" podStartSLOduration=1.058343279 podCreationTimestamp="2025-12-25 19:01:07 +0000 UTC" firstStartedPulling="2025-12-25 19:01:07.461502521 +0000 UTC m=+12.793652857" lastFinishedPulling="2025-12-25 19:01:09.245514759 +0000 UTC m=+14.577665084" observedRunningTime="2025-12-25 19:01:09.842206755 +0000 UTC m=+15.174357098" watchObservedRunningTime="2025-12-25 19:01:09.842355506 +0000 UTC m=+15.174505849"
	Dec 25 19:01:20 old-k8s-version-163446 kubelet[1405]: I1225 19:01:20.125415    1405 kubelet_node_status.go:493] "Fast updating node status as it just became ready"
	Dec 25 19:01:20 old-k8s-version-163446 kubelet[1405]: I1225 19:01:20.188010    1405 topology_manager.go:215] "Topology Admit Handler" podUID="e2ed39ee-6ff2-4de9-b2af-b355672afc97" podNamespace="kube-system" podName="coredns-5dd5756b68-chdzr"
	Dec 25 19:01:20 old-k8s-version-163446 kubelet[1405]: I1225 19:01:20.192597    1405 topology_manager.go:215] "Topology Admit Handler" podUID="937361bb-febe-4584-8f22-755d06866089" podNamespace="kube-system" podName="storage-provisioner"
	Dec 25 19:01:20 old-k8s-version-163446 kubelet[1405]: I1225 19:01:20.240857    1405 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e2ed39ee-6ff2-4de9-b2af-b355672afc97-config-volume\") pod \"coredns-5dd5756b68-chdzr\" (UID: \"e2ed39ee-6ff2-4de9-b2af-b355672afc97\") " pod="kube-system/coredns-5dd5756b68-chdzr"
	Dec 25 19:01:20 old-k8s-version-163446 kubelet[1405]: I1225 19:01:20.240968    1405 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/937361bb-febe-4584-8f22-755d06866089-tmp\") pod \"storage-provisioner\" (UID: \"937361bb-febe-4584-8f22-755d06866089\") " pod="kube-system/storage-provisioner"
	Dec 25 19:01:20 old-k8s-version-163446 kubelet[1405]: I1225 19:01:20.241018    1405 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lm42b\" (UniqueName: \"kubernetes.io/projected/e2ed39ee-6ff2-4de9-b2af-b355672afc97-kube-api-access-lm42b\") pod \"coredns-5dd5756b68-chdzr\" (UID: \"e2ed39ee-6ff2-4de9-b2af-b355672afc97\") " pod="kube-system/coredns-5dd5756b68-chdzr"
	Dec 25 19:01:20 old-k8s-version-163446 kubelet[1405]: I1225 19:01:20.241048    1405 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w99rz\" (UniqueName: \"kubernetes.io/projected/937361bb-febe-4584-8f22-755d06866089-kube-api-access-w99rz\") pod \"storage-provisioner\" (UID: \"937361bb-febe-4584-8f22-755d06866089\") " pod="kube-system/storage-provisioner"
	Dec 25 19:01:20 old-k8s-version-163446 kubelet[1405]: I1225 19:01:20.867621    1405 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=13.867574056 podCreationTimestamp="2025-12-25 19:01:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-25 19:01:20.867010972 +0000 UTC m=+26.199161316" watchObservedRunningTime="2025-12-25 19:01:20.867574056 +0000 UTC m=+26.199724398"
	Dec 25 19:01:20 old-k8s-version-163446 kubelet[1405]: I1225 19:01:20.877883    1405 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-5dd5756b68-chdzr" podStartSLOduration=13.877836423 podCreationTimestamp="2025-12-25 19:01:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-25 19:01:20.877816778 +0000 UTC m=+26.209967121" watchObservedRunningTime="2025-12-25 19:01:20.877836423 +0000 UTC m=+26.209986766"
	Dec 25 19:01:23 old-k8s-version-163446 kubelet[1405]: I1225 19:01:23.818064    1405 topology_manager.go:215] "Topology Admit Handler" podUID="d7ba23a7-2bd3-4170-952b-a664e8b82355" podNamespace="default" podName="busybox"
	Dec 25 19:01:23 old-k8s-version-163446 kubelet[1405]: I1225 19:01:23.864360    1405 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9l222\" (UniqueName: \"kubernetes.io/projected/d7ba23a7-2bd3-4170-952b-a664e8b82355-kube-api-access-9l222\") pod \"busybox\" (UID: \"d7ba23a7-2bd3-4170-952b-a664e8b82355\") " pod="default/busybox"
	
	
	==> storage-provisioner [e7eecb004fae5bc8185816120d4e1b06254cd40704e60885c6afb82130a8cb6a] <==
	I1225 19:01:20.579977       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1225 19:01:20.593259       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1225 19:01:20.593325       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1225 19:01:20.601216       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1225 19:01:20.601323       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"f853f802-d45c-4cc9-a8ea-2b9b3cbed157", APIVersion:"v1", ResourceVersion:"404", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' old-k8s-version-163446_5ec65cd6-f536-465e-8321-9445345e3367 became leader
	I1225 19:01:20.601425       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_old-k8s-version-163446_5ec65cd6-f536-465e-8321-9445345e3367!
	I1225 19:01:20.701659       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_old-k8s-version-163446_5ec65cd6-f536-465e-8321-9445345e3367!
	

                                                
                                                
-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-163446 -n old-k8s-version-163446
helpers_test.go:270: (dbg) Run:  kubectl --context old-k8s-version-163446 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:294: <<< TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive FAILED: end of post-mortem logs <<<
helpers_test.go:295: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (2.27s)

                                                
                                    
TestStartStop/group/no-preload/serial/EnableAddonWhileActive (2.02s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p no-preload-148352 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable metrics-server -p no-preload-148352 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 11 (243.418322ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-25T19:02:03Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable metrics-server -p no-preload-148352 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 11
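The MK_ADDON_ENABLE_PAUSED exit above comes from minikube's pre-flight check that the cluster is not paused before it enables an addon: the command shells into the node and runs "sudo runc list -f json", and that call fails because /run/runc is missing on the node at that moment, so the check errors out instead of reporting paused/unpaused. A minimal way to reproduce the same check by hand, assuming the no-preload-148352 profile is still running (these commands are an illustrative sketch, not output captured by the test):

	out/minikube-linux-amd64 -p no-preload-148352 ssh -- sudo runc list -f json
	out/minikube-linux-amd64 -p no-preload-148352 ssh -- ls -ld /run/runc

If the second command also reports "no such file or directory", the addon enable will presumably keep exiting with status 11 in the same way.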
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context no-preload-148352 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:213: (dbg) Non-zero exit: kubectl --context no-preload-148352 describe deploy/metrics-server -n kube-system: exit status 1 (57.680087ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): deployments.apps "metrics-server" not found

                                                
                                                
** /stderr **
start_stop_delete_test.go:215: failed to get info on auto-pause deployments. args "kubectl --context no-preload-148352 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:219: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
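The assertion at start_stop_delete_test.go:219 expects the metrics-server deployment to have been rewritten to pull its image from the fake registry given on the command line, but because the enable aborted before any deployment was created, the deployment info above is empty. For reference, a hedged sketch of how that image rewrite could be checked once the deployment exists (the jsonpath query below is illustrative, not the test's own code):

	kubectl --context no-preload-148352 -n kube-system get deploy metrics-server \
	  -o jsonpath='{.spec.template.spec.containers[*].image}'
	# per the assertion above, the output should contain fake.domain/registry.k8s.io/echoserver:1.4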
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/no-preload/serial/EnableAddonWhileActive]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestStartStop/group/no-preload/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect no-preload-148352
helpers_test.go:244: (dbg) docker inspect no-preload-148352:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "41819bf1bd4bc2d54346cbc83d4feefe6b78f5e9c433c26cf65f99a4307626cc",
	        "Created": "2025-12-25T19:01:06.66476254Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 266527,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-25T19:01:06.962803262Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:c444b5e11df9d6bc496256b0e11f4e11deb33a885211ded2c3a4667df55bcb6b",
	        "ResolvConfPath": "/var/lib/docker/containers/41819bf1bd4bc2d54346cbc83d4feefe6b78f5e9c433c26cf65f99a4307626cc/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/41819bf1bd4bc2d54346cbc83d4feefe6b78f5e9c433c26cf65f99a4307626cc/hostname",
	        "HostsPath": "/var/lib/docker/containers/41819bf1bd4bc2d54346cbc83d4feefe6b78f5e9c433c26cf65f99a4307626cc/hosts",
	        "LogPath": "/var/lib/docker/containers/41819bf1bd4bc2d54346cbc83d4feefe6b78f5e9c433c26cf65f99a4307626cc/41819bf1bd4bc2d54346cbc83d4feefe6b78f5e9c433c26cf65f99a4307626cc-json.log",
	        "Name": "/no-preload-148352",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "no-preload-148352:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "no-preload-148352",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "41819bf1bd4bc2d54346cbc83d4feefe6b78f5e9c433c26cf65f99a4307626cc",
	                "LowerDir": "/var/lib/docker/overlay2/ce53440f3336a56e5d3b7cdce9b0468a1a553e258f9f62a74535927ca0c65775-init/diff:/var/lib/docker/overlay2/8152586e7e91edad0090b5c322534edd1346ae6dc28cbca1827aa4c23f366758/diff",
	                "MergedDir": "/var/lib/docker/overlay2/ce53440f3336a56e5d3b7cdce9b0468a1a553e258f9f62a74535927ca0c65775/merged",
	                "UpperDir": "/var/lib/docker/overlay2/ce53440f3336a56e5d3b7cdce9b0468a1a553e258f9f62a74535927ca0c65775/diff",
	                "WorkDir": "/var/lib/docker/overlay2/ce53440f3336a56e5d3b7cdce9b0468a1a553e258f9f62a74535927ca0c65775/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "no-preload-148352",
	                "Source": "/var/lib/docker/volumes/no-preload-148352/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "no-preload-148352",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "no-preload-148352",
	                "name.minikube.sigs.k8s.io": "no-preload-148352",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "6fa535dac66c016e2aaf8f6bea5db5d9f5dab11f6a871ed7054d36e286dea6ca",
	            "SandboxKey": "/var/run/docker/netns/6fa535dac66c",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33063"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33064"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33067"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33065"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33066"
	                    }
	                ]
	            },
	            "Networks": {
	                "no-preload-148352": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "7fdcf6cdd30d0ba02321a77fbb55e094d77a371075d285e3dbc5b2c78f7f50f7",
	                    "EndpointID": "4cbcf3432398729355f7aeec0cfeece2f0b151945f8f013cd632cbb3754b4ef6",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "MacAddress": "9a:33:ea:e9:70:91",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "no-preload-148352",
	                        "41819bf1bd4b"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-148352 -n no-preload-148352
helpers_test.go:253: <<< TestStartStop/group/no-preload/serial/EnableAddonWhileActive FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestStartStop/group/no-preload/serial/EnableAddonWhileActive]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-148352 logs -n 25
helpers_test.go:261: TestStartStop/group/no-preload/serial/EnableAddonWhileActive logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────
────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │          PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────
────┤
	│ start   │ -p cert-options-026286 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio                     │ cert-options-026286       │ jenkins │ v1.37.0 │ 25 Dec 25 18:58 UTC │ 25 Dec 25 18:58 UTC │
	│ ssh     │ cert-options-026286 ssh openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt                                                                                                                                                   │ cert-options-026286       │ jenkins │ v1.37.0 │ 25 Dec 25 18:58 UTC │ 25 Dec 25 18:58 UTC │
	│ ssh     │ -p cert-options-026286 -- sudo cat /etc/kubernetes/admin.conf                                                                                                                                                                                 │ cert-options-026286       │ jenkins │ v1.37.0 │ 25 Dec 25 18:58 UTC │ 25 Dec 25 18:58 UTC │
	│ delete  │ -p cert-options-026286                                                                                                                                                                                                                        │ cert-options-026286       │ jenkins │ v1.37.0 │ 25 Dec 25 18:58 UTC │ 25 Dec 25 18:58 UTC │
	│ start   │ -p test-preload-632730 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio                                                                                                                  │ test-preload-632730       │ jenkins │ v1.37.0 │ 25 Dec 25 18:58 UTC │ 25 Dec 25 18:59 UTC │
	│ image   │ test-preload-632730 image pull ghcr.io/medyagh/image-mirrors/busybox:latest                                                                                                                                                                   │ test-preload-632730       │ jenkins │ v1.37.0 │ 25 Dec 25 18:59 UTC │ 25 Dec 25 18:59 UTC │
	│ stop    │ -p test-preload-632730                                                                                                                                                                                                                        │ test-preload-632730       │ jenkins │ v1.37.0 │ 25 Dec 25 18:59 UTC │ 25 Dec 25 18:59 UTC │
	│ start   │ -p test-preload-632730 --preload=true --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=crio                                                                                                                            │ test-preload-632730       │ jenkins │ v1.37.0 │ 25 Dec 25 18:59 UTC │ 25 Dec 25 19:00 UTC │
	│ image   │ test-preload-632730 image list                                                                                                                                                                                                                │ test-preload-632730       │ jenkins │ v1.37.0 │ 25 Dec 25 19:00 UTC │ 25 Dec 25 19:00 UTC │
	│ delete  │ -p test-preload-632730                                                                                                                                                                                                                        │ test-preload-632730       │ jenkins │ v1.37.0 │ 25 Dec 25 19:00 UTC │ 25 Dec 25 19:00 UTC │
	│ start   │ -p kubernetes-upgrade-498224 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                                                                                      │ kubernetes-upgrade-498224 │ jenkins │ v1.37.0 │ 25 Dec 25 19:00 UTC │ 25 Dec 25 19:00 UTC │
	│ delete  │ -p stopped-upgrade-746190                                                                                                                                                                                                                     │ stopped-upgrade-746190    │ jenkins │ v1.37.0 │ 25 Dec 25 19:00 UTC │ 25 Dec 25 19:00 UTC │
	│ start   │ -p old-k8s-version-163446 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-163446    │ jenkins │ v1.37.0 │ 25 Dec 25 19:00 UTC │ 25 Dec 25 19:01 UTC │
	│ stop    │ -p kubernetes-upgrade-498224 --alsologtostderr                                                                                                                                                                                                │ kubernetes-upgrade-498224 │ jenkins │ v1.37.0 │ 25 Dec 25 19:00 UTC │ 25 Dec 25 19:00 UTC │
	│ start   │ -p kubernetes-upgrade-498224 --memory=3072 --kubernetes-version=v1.35.0-rc.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                                                                                 │ kubernetes-upgrade-498224 │ jenkins │ v1.37.0 │ 25 Dec 25 19:00 UTC │                     │
	│ start   │ -p cert-expiration-002470 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio                                                                                                                                     │ cert-expiration-002470    │ jenkins │ v1.37.0 │ 25 Dec 25 19:00 UTC │ 25 Dec 25 19:01 UTC │
	│ delete  │ -p cert-expiration-002470                                                                                                                                                                                                                     │ cert-expiration-002470    │ jenkins │ v1.37.0 │ 25 Dec 25 19:01 UTC │ 25 Dec 25 19:01 UTC │
	│ start   │ -p no-preload-148352 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-rc.1                                                                                  │ no-preload-148352         │ jenkins │ v1.37.0 │ 25 Dec 25 19:01 UTC │ 25 Dec 25 19:01 UTC │
	│ delete  │ -p running-upgrade-861192                                                                                                                                                                                                                     │ running-upgrade-861192    │ jenkins │ v1.37.0 │ 25 Dec 25 19:01 UTC │ 25 Dec 25 19:01 UTC │
	│ start   │ -p embed-certs-684693 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.3                                                                                        │ embed-certs-684693        │ jenkins │ v1.37.0 │ 25 Dec 25 19:01 UTC │                     │
	│ addons  │ enable metrics-server -p old-k8s-version-163446 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-163446    │ jenkins │ v1.37.0 │ 25 Dec 25 19:01 UTC │                     │
	│ stop    │ -p old-k8s-version-163446 --alsologtostderr -v=3                                                                                                                                                                                              │ old-k8s-version-163446    │ jenkins │ v1.37.0 │ 25 Dec 25 19:01 UTC │ 25 Dec 25 19:01 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-163446 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-163446    │ jenkins │ v1.37.0 │ 25 Dec 25 19:01 UTC │ 25 Dec 25 19:01 UTC │
	│ start   │ -p old-k8s-version-163446 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-163446    │ jenkins │ v1.37.0 │ 25 Dec 25 19:01 UTC │                     │
	│ addons  │ enable metrics-server -p no-preload-148352 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-148352         │ jenkins │ v1.37.0 │ 25 Dec 25 19:02 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────
────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/25 19:01:51
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1225 19:01:51.675605  276130 out.go:360] Setting OutFile to fd 1 ...
	I1225 19:01:51.675754  276130 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1225 19:01:51.675766  276130 out.go:374] Setting ErrFile to fd 2...
	I1225 19:01:51.675773  276130 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1225 19:01:51.676086  276130 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22301-5579/.minikube/bin
	I1225 19:01:51.676847  276130 out.go:368] Setting JSON to false
	I1225 19:01:51.678335  276130 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":2660,"bootTime":1766686652,"procs":317,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1225 19:01:51.678405  276130 start.go:143] virtualization: kvm guest
	I1225 19:01:51.680654  276130 out.go:179] * [old-k8s-version-163446] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1225 19:01:51.682364  276130 out.go:179]   - MINIKUBE_LOCATION=22301
	I1225 19:01:51.682359  276130 notify.go:221] Checking for updates...
	I1225 19:01:51.684153  276130 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1225 19:01:51.687023  276130 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22301-5579/kubeconfig
	I1225 19:01:51.688399  276130 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22301-5579/.minikube
	I1225 19:01:51.690267  276130 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1225 19:01:51.692474  276130 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1225 19:01:51.694621  276130 config.go:182] Loaded profile config "old-k8s-version-163446": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1225 19:01:51.696580  276130 out.go:179] * Kubernetes 1.34.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.34.3
	I1225 19:01:51.697719  276130 driver.go:422] Setting default libvirt URI to qemu:///system
	I1225 19:01:51.728544  276130 docker.go:124] docker version: linux-29.1.3:Docker Engine - Community
	I1225 19:01:51.728646  276130 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1225 19:01:51.800338  276130 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:64 OomKillDisable:false NGoroutines:75 SystemTime:2025-12-25 19:01:51.787380411 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:dea7da592f5d1d2b7755e3a161be07f43fad8f75 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1225 19:01:51.800482  276130 docker.go:319] overlay module found
	I1225 19:01:51.802225  276130 out.go:179] * Using the docker driver based on existing profile
	I1225 19:01:51.803290  276130 start.go:309] selected driver: docker
	I1225 19:01:51.803307  276130 start.go:928] validating driver "docker" against &{Name:old-k8s-version-163446 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-163446 Namespace:default APIServerHAVIP: AP
IServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 M
ountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1225 19:01:51.803446  276130 start.go:939] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1225 19:01:51.804190  276130 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1225 19:01:51.874638  276130 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:64 OomKillDisable:false NGoroutines:75 SystemTime:2025-12-25 19:01:51.86245042 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x8
6_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:dea7da592f5d1d2b7755e3a161be07f43fad8f75 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[ma
p[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1225 19:01:51.875085  276130 start_flags.go:1019] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1225 19:01:51.875118  276130 cni.go:84] Creating CNI manager for ""
	I1225 19:01:51.875189  276130 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1225 19:01:51.875243  276130 start.go:353] cluster config:
	{Name:old-k8s-version-163446 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-163446 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local
ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetr
ics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1225 19:01:51.877033  276130 out.go:179] * Starting "old-k8s-version-163446" primary control-plane node in "old-k8s-version-163446" cluster
	I1225 19:01:51.878119  276130 cache.go:134] Beginning downloading kic base image for docker with crio
	I1225 19:01:51.879253  276130 out.go:179] * Pulling base image v0.0.48-1766570851-22316 ...
	I1225 19:01:51.880347  276130 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1225 19:01:51.880389  276130 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22301-5579/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4
	I1225 19:01:51.880401  276130 cache.go:65] Caching tarball of preloaded images
	I1225 19:01:51.880419  276130 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a in local docker daemon
	I1225 19:01:51.880482  276130 preload.go:251] Found /home/jenkins/minikube-integration/22301-5579/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1225 19:01:51.880497  276130 cache.go:68] Finished verifying existence of preloaded tar for v1.28.0 on crio
	I1225 19:01:51.880633  276130 profile.go:143] Saving config to /home/jenkins/minikube-integration/22301-5579/.minikube/profiles/old-k8s-version-163446/config.json ...
	I1225 19:01:51.902972  276130 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a in local docker daemon, skipping pull
	I1225 19:01:51.902994  276130 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a exists in daemon, skipping load
	I1225 19:01:51.903012  276130 cache.go:243] Successfully downloaded all kic artifacts
	I1225 19:01:51.903047  276130 start.go:360] acquireMachinesLock for old-k8s-version-163446: {Name:mk30fb3772624127c2ac3dfcbe1e2fab0a9ef77c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1225 19:01:51.903113  276130 start.go:364] duration metric: took 44.495µs to acquireMachinesLock for "old-k8s-version-163446"
	I1225 19:01:51.903135  276130 start.go:96] Skipping create...Using existing machine configuration
	I1225 19:01:51.903141  276130 fix.go:54] fixHost starting: 
	I1225 19:01:51.903429  276130 cli_runner.go:164] Run: docker container inspect old-k8s-version-163446 --format={{.State.Status}}
	I1225 19:01:51.923376  276130 fix.go:112] recreateIfNeeded on old-k8s-version-163446: state=Stopped err=<nil>
	W1225 19:01:51.923416  276130 fix.go:138] unexpected machine state, will restart: <nil>
	I1225 19:01:47.982821  260034 api_server.go:299] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1225 19:01:51.232631  270844 addons.go:530] duration metric: took 504.27226ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I1225 19:01:51.523694  270844 kapi.go:214] "coredns" deployment in "kube-system" namespace and "embed-certs-684693" context rescaled to 1 replicas
	W1225 19:01:53.023371  270844 node_ready.go:57] node "embed-certs-684693" has "Ready":"False" status (will retry)
	W1225 19:01:51.437852  265912 node_ready.go:57] node "no-preload-148352" has "Ready":"False" status (will retry)
	I1225 19:01:51.936453  265912 node_ready.go:49] node "no-preload-148352" is "Ready"
	I1225 19:01:51.936547  265912 node_ready.go:38] duration metric: took 12.004098982s for node "no-preload-148352" to be "Ready" ...
	I1225 19:01:51.936570  265912 api_server.go:52] waiting for apiserver process to appear ...
	I1225 19:01:51.936637  265912 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1225 19:01:51.955163  265912 api_server.go:72] duration metric: took 12.348945573s to wait for apiserver process to appear ...
	I1225 19:01:51.955196  265912 api_server.go:88] waiting for apiserver healthz status ...
	I1225 19:01:51.955220  265912 api_server.go:299] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1225 19:01:51.962118  265912 api_server.go:325] https://192.168.85.2:8443/healthz returned 200:
	ok
	I1225 19:01:51.963957  265912 api_server.go:141] control plane version: v1.35.0-rc.1
	I1225 19:01:51.963996  265912 api_server.go:131] duration metric: took 8.792203ms to wait for apiserver health ...
	I1225 19:01:51.964086  265912 system_pods.go:43] waiting for kube-system pods to appear ...
	I1225 19:01:51.972188  265912 system_pods.go:59] 8 kube-system pods found
	I1225 19:01:51.972227  265912 system_pods.go:61] "coredns-7d764666f9-lqvms" [87fc533e-6490-4d36-a61b-a754a22afd56] Pending
	I1225 19:01:51.972240  265912 system_pods.go:61] "etcd-no-preload-148352" [07fbfda5-ced9-48bb-819a-27d7a9d3c8c6] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1225 19:01:51.972247  265912 system_pods.go:61] "kindnet-jx25d" [25f416b3-e74e-4d6e-9b1b-d4ddf07659c4] Running
	I1225 19:01:51.972257  265912 system_pods.go:61] "kube-apiserver-no-preload-148352" [9bec5758-56c2-488b-8593-35fcdb4ec786] Running
	I1225 19:01:51.972264  265912 system_pods.go:61] "kube-controller-manager-no-preload-148352" [b44b6979-c22b-402f-8ce0-fabd78630461] Running
	I1225 19:01:51.972271  265912 system_pods.go:61] "kube-proxy-j2p4x" [ae9faca6-3e41-4e10-ae96-b7a397c3be75] Running
	I1225 19:01:51.972280  265912 system_pods.go:61] "kube-scheduler-no-preload-148352" [6dcf4763-851f-4d07-b708-4b5a579c03cb] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1225 19:01:51.972290  265912 system_pods.go:61] "storage-provisioner" [4caa74a1-bb32-45a7-9cc3-d0af791be23e] Pending
	I1225 19:01:51.972298  265912 system_pods.go:74] duration metric: took 8.204547ms to wait for pod list to return data ...
	I1225 19:01:51.972307  265912 default_sa.go:34] waiting for default service account to be created ...
	I1225 19:01:51.976257  265912 default_sa.go:45] found service account: "default"
	I1225 19:01:51.976287  265912 default_sa.go:55] duration metric: took 3.972409ms for default service account to be created ...
	I1225 19:01:51.976298  265912 system_pods.go:116] waiting for k8s-apps to be running ...
	I1225 19:01:51.980060  265912 system_pods.go:86] 8 kube-system pods found
	I1225 19:01:51.980094  265912 system_pods.go:89] "coredns-7d764666f9-lqvms" [87fc533e-6490-4d36-a61b-a754a22afd56] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1225 19:01:51.980104  265912 system_pods.go:89] "etcd-no-preload-148352" [07fbfda5-ced9-48bb-819a-27d7a9d3c8c6] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1225 19:01:51.980113  265912 system_pods.go:89] "kindnet-jx25d" [25f416b3-e74e-4d6e-9b1b-d4ddf07659c4] Running
	I1225 19:01:51.980120  265912 system_pods.go:89] "kube-apiserver-no-preload-148352" [9bec5758-56c2-488b-8593-35fcdb4ec786] Running
	I1225 19:01:51.980126  265912 system_pods.go:89] "kube-controller-manager-no-preload-148352" [b44b6979-c22b-402f-8ce0-fabd78630461] Running
	I1225 19:01:51.980131  265912 system_pods.go:89] "kube-proxy-j2p4x" [ae9faca6-3e41-4e10-ae96-b7a397c3be75] Running
	I1225 19:01:51.980139  265912 system_pods.go:89] "kube-scheduler-no-preload-148352" [6dcf4763-851f-4d07-b708-4b5a579c03cb] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1225 19:01:51.980232  265912 system_pods.go:89] "storage-provisioner" [4caa74a1-bb32-45a7-9cc3-d0af791be23e] Pending
	I1225 19:01:51.980275  265912 retry.go:84] will retry after 300ms: missing components: kube-dns
	I1225 19:01:52.262705  265912 system_pods.go:86] 8 kube-system pods found
	I1225 19:01:52.262747  265912 system_pods.go:89] "coredns-7d764666f9-lqvms" [87fc533e-6490-4d36-a61b-a754a22afd56] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1225 19:01:52.262757  265912 system_pods.go:89] "etcd-no-preload-148352" [07fbfda5-ced9-48bb-819a-27d7a9d3c8c6] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1225 19:01:52.262765  265912 system_pods.go:89] "kindnet-jx25d" [25f416b3-e74e-4d6e-9b1b-d4ddf07659c4] Running
	I1225 19:01:52.262771  265912 system_pods.go:89] "kube-apiserver-no-preload-148352" [9bec5758-56c2-488b-8593-35fcdb4ec786] Running
	I1225 19:01:52.262777  265912 system_pods.go:89] "kube-controller-manager-no-preload-148352" [b44b6979-c22b-402f-8ce0-fabd78630461] Running
	I1225 19:01:52.262783  265912 system_pods.go:89] "kube-proxy-j2p4x" [ae9faca6-3e41-4e10-ae96-b7a397c3be75] Running
	I1225 19:01:52.262791  265912 system_pods.go:89] "kube-scheduler-no-preload-148352" [6dcf4763-851f-4d07-b708-4b5a579c03cb] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1225 19:01:52.262801  265912 system_pods.go:89] "storage-provisioner" [4caa74a1-bb32-45a7-9cc3-d0af791be23e] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1225 19:01:52.602805  265912 system_pods.go:86] 8 kube-system pods found
	I1225 19:01:52.602846  265912 system_pods.go:89] "coredns-7d764666f9-lqvms" [87fc533e-6490-4d36-a61b-a754a22afd56] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1225 19:01:52.602872  265912 system_pods.go:89] "etcd-no-preload-148352" [07fbfda5-ced9-48bb-819a-27d7a9d3c8c6] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1225 19:01:52.602884  265912 system_pods.go:89] "kindnet-jx25d" [25f416b3-e74e-4d6e-9b1b-d4ddf07659c4] Running
	I1225 19:01:52.602890  265912 system_pods.go:89] "kube-apiserver-no-preload-148352" [9bec5758-56c2-488b-8593-35fcdb4ec786] Running
	I1225 19:01:52.602931  265912 system_pods.go:89] "kube-controller-manager-no-preload-148352" [b44b6979-c22b-402f-8ce0-fabd78630461] Running
	I1225 19:01:52.602948  265912 system_pods.go:89] "kube-proxy-j2p4x" [ae9faca6-3e41-4e10-ae96-b7a397c3be75] Running
	I1225 19:01:52.602958  265912 system_pods.go:89] "kube-scheduler-no-preload-148352" [6dcf4763-851f-4d07-b708-4b5a579c03cb] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1225 19:01:52.602971  265912 system_pods.go:89] "storage-provisioner" [4caa74a1-bb32-45a7-9cc3-d0af791be23e] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1225 19:01:52.928509  265912 system_pods.go:86] 8 kube-system pods found
	I1225 19:01:52.928550  265912 system_pods.go:89] "coredns-7d764666f9-lqvms" [87fc533e-6490-4d36-a61b-a754a22afd56] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1225 19:01:52.928557  265912 system_pods.go:89] "etcd-no-preload-148352" [07fbfda5-ced9-48bb-819a-27d7a9d3c8c6] Running
	I1225 19:01:52.928564  265912 system_pods.go:89] "kindnet-jx25d" [25f416b3-e74e-4d6e-9b1b-d4ddf07659c4] Running
	I1225 19:01:52.928568  265912 system_pods.go:89] "kube-apiserver-no-preload-148352" [9bec5758-56c2-488b-8593-35fcdb4ec786] Running
	I1225 19:01:52.928574  265912 system_pods.go:89] "kube-controller-manager-no-preload-148352" [b44b6979-c22b-402f-8ce0-fabd78630461] Running
	I1225 19:01:52.928579  265912 system_pods.go:89] "kube-proxy-j2p4x" [ae9faca6-3e41-4e10-ae96-b7a397c3be75] Running
	I1225 19:01:52.928586  265912 system_pods.go:89] "kube-scheduler-no-preload-148352" [6dcf4763-851f-4d07-b708-4b5a579c03cb] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1225 19:01:52.928594  265912 system_pods.go:89] "storage-provisioner" [4caa74a1-bb32-45a7-9cc3-d0af791be23e] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1225 19:01:53.518205  265912 system_pods.go:86] 8 kube-system pods found
	I1225 19:01:53.518237  265912 system_pods.go:89] "coredns-7d764666f9-lqvms" [87fc533e-6490-4d36-a61b-a754a22afd56] Running
	I1225 19:01:53.518245  265912 system_pods.go:89] "etcd-no-preload-148352" [07fbfda5-ced9-48bb-819a-27d7a9d3c8c6] Running
	I1225 19:01:53.518252  265912 system_pods.go:89] "kindnet-jx25d" [25f416b3-e74e-4d6e-9b1b-d4ddf07659c4] Running
	I1225 19:01:53.518257  265912 system_pods.go:89] "kube-apiserver-no-preload-148352" [9bec5758-56c2-488b-8593-35fcdb4ec786] Running
	I1225 19:01:53.518263  265912 system_pods.go:89] "kube-controller-manager-no-preload-148352" [b44b6979-c22b-402f-8ce0-fabd78630461] Running
	I1225 19:01:53.518268  265912 system_pods.go:89] "kube-proxy-j2p4x" [ae9faca6-3e41-4e10-ae96-b7a397c3be75] Running
	I1225 19:01:53.518277  265912 system_pods.go:89] "kube-scheduler-no-preload-148352" [6dcf4763-851f-4d07-b708-4b5a579c03cb] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1225 19:01:53.518286  265912 system_pods.go:89] "storage-provisioner" [4caa74a1-bb32-45a7-9cc3-d0af791be23e] Running
	I1225 19:01:53.518298  265912 system_pods.go:126] duration metric: took 1.541992254s to wait for k8s-apps to be running ...
	I1225 19:01:53.518312  265912 system_svc.go:44] waiting for kubelet service to be running ....
	I1225 19:01:53.518368  265912 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1225 19:01:53.532105  265912 system_svc.go:56] duration metric: took 13.784132ms WaitForService to wait for kubelet
	I1225 19:01:53.532135  265912 kubeadm.go:587] duration metric: took 13.925923208s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1225 19:01:53.532174  265912 node_conditions.go:102] verifying NodePressure condition ...
	I1225 19:01:53.534852  265912 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1225 19:01:53.534883  265912 node_conditions.go:123] node cpu capacity is 8
	I1225 19:01:53.534911  265912 node_conditions.go:105] duration metric: took 2.72618ms to run NodePressure ...
	I1225 19:01:53.534921  265912 start.go:242] waiting for startup goroutines ...
	I1225 19:01:53.534928  265912 start.go:247] waiting for cluster config update ...
	I1225 19:01:53.534938  265912 start.go:256] writing updated cluster config ...
	I1225 19:01:53.535188  265912 ssh_runner.go:195] Run: rm -f paused
	I1225 19:01:53.539003  265912 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1225 19:01:53.542592  265912 pod_ready.go:83] waiting for pod "coredns-7d764666f9-lqvms" in "kube-system" namespace to be "Ready" or be gone ...
	I1225 19:01:53.546466  265912 pod_ready.go:94] pod "coredns-7d764666f9-lqvms" is "Ready"
	I1225 19:01:53.546490  265912 pod_ready.go:86] duration metric: took 3.87525ms for pod "coredns-7d764666f9-lqvms" in "kube-system" namespace to be "Ready" or be gone ...
	I1225 19:01:53.548174  265912 pod_ready.go:83] waiting for pod "etcd-no-preload-148352" in "kube-system" namespace to be "Ready" or be gone ...
	I1225 19:01:53.551672  265912 pod_ready.go:94] pod "etcd-no-preload-148352" is "Ready"
	I1225 19:01:53.551690  265912 pod_ready.go:86] duration metric: took 3.499511ms for pod "etcd-no-preload-148352" in "kube-system" namespace to be "Ready" or be gone ...
	I1225 19:01:53.553250  265912 pod_ready.go:83] waiting for pod "kube-apiserver-no-preload-148352" in "kube-system" namespace to be "Ready" or be gone ...
	I1225 19:01:53.556527  265912 pod_ready.go:94] pod "kube-apiserver-no-preload-148352" is "Ready"
	I1225 19:01:53.556544  265912 pod_ready.go:86] duration metric: took 3.277149ms for pod "kube-apiserver-no-preload-148352" in "kube-system" namespace to be "Ready" or be gone ...
	I1225 19:01:53.558187  265912 pod_ready.go:83] waiting for pod "kube-controller-manager-no-preload-148352" in "kube-system" namespace to be "Ready" or be gone ...
	I1225 19:01:53.943296  265912 pod_ready.go:94] pod "kube-controller-manager-no-preload-148352" is "Ready"
	I1225 19:01:53.943330  265912 pod_ready.go:86] duration metric: took 385.124131ms for pod "kube-controller-manager-no-preload-148352" in "kube-system" namespace to be "Ready" or be gone ...
	I1225 19:01:54.143419  265912 pod_ready.go:83] waiting for pod "kube-proxy-j2p4x" in "kube-system" namespace to be "Ready" or be gone ...
	I1225 19:01:54.543815  265912 pod_ready.go:94] pod "kube-proxy-j2p4x" is "Ready"
	I1225 19:01:54.543842  265912 pod_ready.go:86] duration metric: took 400.398926ms for pod "kube-proxy-j2p4x" in "kube-system" namespace to be "Ready" or be gone ...
	I1225 19:01:54.744389  265912 pod_ready.go:83] waiting for pod "kube-scheduler-no-preload-148352" in "kube-system" namespace to be "Ready" or be gone ...
	I1225 19:01:55.143370  265912 pod_ready.go:94] pod "kube-scheduler-no-preload-148352" is "Ready"
	I1225 19:01:55.143397  265912 pod_ready.go:86] duration metric: took 398.984672ms for pod "kube-scheduler-no-preload-148352" in "kube-system" namespace to be "Ready" or be gone ...
	I1225 19:01:55.143409  265912 pod_ready.go:40] duration metric: took 1.604375839s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
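The per-pod waits above select the kube-system control-plane pods by label. A rough manual equivalent with plain kubectl, a sketch only (the context name matches this profile; the timeout is illustrative):

kubectl --context no-preload-148352 -n kube-system wait pod \
  -l k8s-app=kube-dns --for=condition=Ready --timeout=240s
kubectl --context no-preload-148352 -n kube-system get pod \
  -l 'component in (etcd,kube-apiserver,kube-controller-manager,kube-scheduler)'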
	I1225 19:01:55.186121  265912 start.go:625] kubectl: 1.35.0, cluster: 1.35.0-rc.1 (minor skew: 0)
	I1225 19:01:55.187703  265912 out.go:179] * Done! kubectl is now configured to use "no-preload-148352" cluster and "default" namespace by default
	I1225 19:01:51.925295  276130 out.go:252] * Restarting existing docker container for "old-k8s-version-163446" ...
	I1225 19:01:51.925382  276130 cli_runner.go:164] Run: docker start old-k8s-version-163446
	I1225 19:01:52.230176  276130 cli_runner.go:164] Run: docker container inspect old-k8s-version-163446 --format={{.State.Status}}
	I1225 19:01:52.253239  276130 kic.go:430] container "old-k8s-version-163446" state is running.
	I1225 19:01:52.253707  276130 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-163446
	I1225 19:01:52.284107  276130 profile.go:143] Saving config to /home/jenkins/minikube-integration/22301-5579/.minikube/profiles/old-k8s-version-163446/config.json ...
	I1225 19:01:52.284389  276130 machine.go:94] provisionDockerMachine start ...
	I1225 19:01:52.284493  276130 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-163446
	I1225 19:01:52.309604  276130 main.go:144] libmachine: Using SSH client type: native
	I1225 19:01:52.309911  276130 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e300] 0x850fa0 <nil>  [] 0s} 127.0.0.1 33073 <nil> <nil>}
	I1225 19:01:52.309931  276130 main.go:144] libmachine: About to run SSH command:
	hostname
	I1225 19:01:52.310560  276130 main.go:144] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:51420->127.0.0.1:33073: read: connection reset by peer
	I1225 19:01:55.438773  276130 main.go:144] libmachine: SSH cmd err, output: <nil>: old-k8s-version-163446
	
	I1225 19:01:55.438801  276130 ubuntu.go:182] provisioning hostname "old-k8s-version-163446"
	I1225 19:01:55.438858  276130 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-163446
	I1225 19:01:55.456977  276130 main.go:144] libmachine: Using SSH client type: native
	I1225 19:01:55.457218  276130 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e300] 0x850fa0 <nil>  [] 0s} 127.0.0.1 33073 <nil> <nil>}
	I1225 19:01:55.457233  276130 main.go:144] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-163446 && echo "old-k8s-version-163446" | sudo tee /etc/hostname
	I1225 19:01:55.589831  276130 main.go:144] libmachine: SSH cmd err, output: <nil>: old-k8s-version-163446
	
	I1225 19:01:55.589916  276130 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-163446
	I1225 19:01:55.608474  276130 main.go:144] libmachine: Using SSH client type: native
	I1225 19:01:55.608714  276130 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e300] 0x850fa0 <nil>  [] 0s} 127.0.0.1 33073 <nil> <nil>}
	I1225 19:01:55.608740  276130 main.go:144] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-163446' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-163446/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-163446' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1225 19:01:55.733669  276130 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	I1225 19:01:55.733696  276130 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22301-5579/.minikube CaCertPath:/home/jenkins/minikube-integration/22301-5579/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22301-5579/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22301-5579/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22301-5579/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22301-5579/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22301-5579/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22301-5579/.minikube}
	I1225 19:01:55.733725  276130 ubuntu.go:190] setting up certificates
	I1225 19:01:55.733734  276130 provision.go:84] configureAuth start
	I1225 19:01:55.733796  276130 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-163446
	I1225 19:01:55.752370  276130 provision.go:143] copyHostCerts
	I1225 19:01:55.752450  276130 exec_runner.go:144] found /home/jenkins/minikube-integration/22301-5579/.minikube/key.pem, removing ...
	I1225 19:01:55.752469  276130 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22301-5579/.minikube/key.pem
	I1225 19:01:55.752551  276130 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22301-5579/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22301-5579/.minikube/key.pem (1679 bytes)
	I1225 19:01:55.752677  276130 exec_runner.go:144] found /home/jenkins/minikube-integration/22301-5579/.minikube/ca.pem, removing ...
	I1225 19:01:55.752690  276130 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22301-5579/.minikube/ca.pem
	I1225 19:01:55.752734  276130 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22301-5579/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22301-5579/.minikube/ca.pem (1078 bytes)
	I1225 19:01:55.752835  276130 exec_runner.go:144] found /home/jenkins/minikube-integration/22301-5579/.minikube/cert.pem, removing ...
	I1225 19:01:55.752844  276130 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22301-5579/.minikube/cert.pem
	I1225 19:01:55.752870  276130 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22301-5579/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22301-5579/.minikube/cert.pem (1123 bytes)
	I1225 19:01:55.752973  276130 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22301-5579/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22301-5579/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22301-5579/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-163446 san=[127.0.0.1 192.168.103.2 localhost minikube old-k8s-version-163446]
	I1225 19:01:55.896803  276130 provision.go:177] copyRemoteCerts
	I1225 19:01:55.896864  276130 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1225 19:01:55.896921  276130 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-163446
	I1225 19:01:55.915054  276130 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33073 SSHKeyPath:/home/jenkins/minikube-integration/22301-5579/.minikube/machines/old-k8s-version-163446/id_rsa Username:docker}
	I1225 19:01:56.008028  276130 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22301-5579/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1225 19:01:56.025380  276130 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22301-5579/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I1225 19:01:56.042379  276130 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22301-5579/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1225 19:01:56.059197  276130 provision.go:87] duration metric: took 325.450567ms to configureAuth
	I1225 19:01:56.059235  276130 ubuntu.go:206] setting minikube options for container-runtime
	I1225 19:01:56.059435  276130 config.go:182] Loaded profile config "old-k8s-version-163446": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1225 19:01:56.059547  276130 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-163446
	I1225 19:01:56.079187  276130 main.go:144] libmachine: Using SSH client type: native
	I1225 19:01:56.079459  276130 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e300] 0x850fa0 <nil>  [] 0s} 127.0.0.1 33073 <nil> <nil>}
	I1225 19:01:56.079484  276130 main.go:144] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1225 19:01:56.376750  276130 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1225 19:01:56.376778  276130 machine.go:97] duration metric: took 4.092368745s to provisionDockerMachine
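Whether the --insecure-registry option written to /etc/sysconfig/crio.minikube above actually reached the running crio process can be checked from the process command line (a sketch; assumes the crio unit sources that file, as it does in the minikube base image):

cat /etc/sysconfig/crio.minikube
ps -o args= -C crio | tr ' ' '\n' | grep -A1 insecure-registry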
	I1225 19:01:56.376792  276130 start.go:293] postStartSetup for "old-k8s-version-163446" (driver="docker")
	I1225 19:01:56.376806  276130 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1225 19:01:56.376868  276130 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1225 19:01:56.376931  276130 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-163446
	I1225 19:01:56.396934  276130 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33073 SSHKeyPath:/home/jenkins/minikube-integration/22301-5579/.minikube/machines/old-k8s-version-163446/id_rsa Username:docker}
	I1225 19:01:56.487179  276130 ssh_runner.go:195] Run: cat /etc/os-release
	I1225 19:01:56.490768  276130 main.go:144] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1225 19:01:56.490791  276130 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1225 19:01:56.490802  276130 filesync.go:126] Scanning /home/jenkins/minikube-integration/22301-5579/.minikube/addons for local assets ...
	I1225 19:01:56.490846  276130 filesync.go:126] Scanning /home/jenkins/minikube-integration/22301-5579/.minikube/files for local assets ...
	I1225 19:01:56.490965  276130 filesync.go:149] local asset: /home/jenkins/minikube-integration/22301-5579/.minikube/files/etc/ssl/certs/91122.pem -> 91122.pem in /etc/ssl/certs
	I1225 19:01:56.491060  276130 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1225 19:01:56.498390  276130 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22301-5579/.minikube/files/etc/ssl/certs/91122.pem --> /etc/ssl/certs/91122.pem (1708 bytes)
	I1225 19:01:56.515486  276130 start.go:296] duration metric: took 138.680859ms for postStartSetup
	I1225 19:01:56.515553  276130 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1225 19:01:56.515620  276130 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-163446
	I1225 19:01:56.534756  276130 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33073 SSHKeyPath:/home/jenkins/minikube-integration/22301-5579/.minikube/machines/old-k8s-version-163446/id_rsa Username:docker}
	I1225 19:01:56.623072  276130 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1225 19:01:56.627508  276130 fix.go:56] duration metric: took 4.724362619s for fixHost
	I1225 19:01:56.627533  276130 start.go:83] releasing machines lock for "old-k8s-version-163446", held for 4.724407121s
	I1225 19:01:56.627585  276130 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-163446
	I1225 19:01:56.645589  276130 ssh_runner.go:195] Run: cat /version.json
	I1225 19:01:56.645642  276130 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-163446
	I1225 19:01:56.645663  276130 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1225 19:01:56.645731  276130 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-163446
	I1225 19:01:56.664433  276130 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33073 SSHKeyPath:/home/jenkins/minikube-integration/22301-5579/.minikube/machines/old-k8s-version-163446/id_rsa Username:docker}
	I1225 19:01:56.664731  276130 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33073 SSHKeyPath:/home/jenkins/minikube-integration/22301-5579/.minikube/machines/old-k8s-version-163446/id_rsa Username:docker}
	I1225 19:01:52.983253  260034 api_server.go:315] stopped: https://192.168.94.2:8443/healthz: Get "https://192.168.94.2:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1225 19:01:52.983321  260034 api_server.go:299] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1225 19:01:56.804929  276130 ssh_runner.go:195] Run: systemctl --version
	I1225 19:01:56.811494  276130 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1225 19:01:56.854732  276130 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1225 19:01:56.859988  276130 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1225 19:01:56.860089  276130 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1225 19:01:56.869218  276130 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1225 19:01:56.869244  276130 start.go:496] detecting cgroup driver to use...
	I1225 19:01:56.869277  276130 detect.go:190] detected "systemd" cgroup driver on host os
	I1225 19:01:56.869319  276130 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1225 19:01:56.884649  276130 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1225 19:01:56.900631  276130 docker.go:218] disabling cri-docker service (if available) ...
	I1225 19:01:56.900686  276130 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1225 19:01:56.919281  276130 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1225 19:01:56.934171  276130 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1225 19:01:57.025095  276130 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1225 19:01:57.113247  276130 docker.go:234] disabling docker service ...
	I1225 19:01:57.113306  276130 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1225 19:01:57.128235  276130 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1225 19:01:57.140313  276130 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1225 19:01:57.219850  276130 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1225 19:01:57.301227  276130 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1225 19:01:57.314525  276130 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1225 19:01:57.329026  276130 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I1225 19:01:57.329080  276130 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1225 19:01:57.338028  276130 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1225 19:01:57.338093  276130 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1225 19:01:57.346843  276130 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1225 19:01:57.356103  276130 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1225 19:01:57.364856  276130 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1225 19:01:57.372700  276130 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1225 19:01:57.381854  276130 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1225 19:01:57.390451  276130 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1225 19:01:57.399240  276130 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1225 19:01:57.406650  276130 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1225 19:01:57.414017  276130 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1225 19:01:57.494696  276130 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1225 19:01:57.641060  276130 start.go:553] Will wait 60s for socket path /var/run/crio/crio.sock
	I1225 19:01:57.641141  276130 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1225 19:01:57.645008  276130 start.go:574] Will wait 60s for crictl version
	I1225 19:01:57.645062  276130 ssh_runner.go:195] Run: which crictl
	I1225 19:01:57.648469  276130 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1225 19:01:57.671908  276130 start.go:590] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1225 19:01:57.671998  276130 ssh_runner.go:195] Run: crio --version
	I1225 19:01:57.700010  276130 ssh_runner.go:195] Run: crio --version
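A quick way to confirm that the sed edits above landed in the cri-o drop-in and that the restarted daemon picked them up (a sketch using the same paths and the `crio config` command seen in this log):

sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|default_sysctls' /etc/crio/crio.conf.d/02-crio.conf
sudo crio config | grep -E 'cgroup_manager|pause_image'   # effective runtime config
sudo systemctl is-active crio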
	I1225 19:01:57.729201  276130 out.go:179] * Preparing Kubernetes v1.28.0 on CRI-O 1.34.3 ...
	I1225 19:01:57.730354  276130 cli_runner.go:164] Run: docker network inspect old-k8s-version-163446 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1225 19:01:57.749041  276130 ssh_runner.go:195] Run: grep 192.168.103.1	host.minikube.internal$ /etc/hosts
	I1225 19:01:57.753048  276130 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.103.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1225 19:01:57.763306  276130 kubeadm.go:884] updating cluster {Name:old-k8s-version-163446 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-163446 Namespace:default APIServerHAVIP: APIServerName:minik
ubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p Mount
UID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false} ...
	I1225 19:01:57.763401  276130 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1225 19:01:57.763439  276130 ssh_runner.go:195] Run: sudo crictl images --output json
	I1225 19:01:57.796309  276130 crio.go:561] all images are preloaded for cri-o runtime.
	I1225 19:01:57.796334  276130 crio.go:433] Images already preloaded, skipping extraction
	I1225 19:01:57.796395  276130 ssh_runner.go:195] Run: sudo crictl images --output json
	I1225 19:01:57.821609  276130 crio.go:561] all images are preloaded for cri-o runtime.
	I1225 19:01:57.821629  276130 cache_images.go:86] Images are preloaded, skipping loading
	I1225 19:01:57.821636  276130 kubeadm.go:935] updating node { 192.168.103.2 8443 v1.28.0 crio true true} ...
	I1225 19:01:57.821737  276130 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=old-k8s-version-163446 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.103.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-163446 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1225 19:01:57.821799  276130 ssh_runner.go:195] Run: crio config
	I1225 19:01:57.867365  276130 cni.go:84] Creating CNI manager for ""
	I1225 19:01:57.867387  276130 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1225 19:01:57.867403  276130 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1225 19:01:57.867423  276130 kubeadm.go:197] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.103.2 APIServerPort:8443 KubernetesVersion:v1.28.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-163446 NodeName:old-k8s-version-163446 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.103.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.103.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt Static
PodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1225 19:01:57.867534  276130 kubeadm.go:203] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.103.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "old-k8s-version-163446"
	  kubeletExtraArgs:
	    node-ip: 192.168.103.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.103.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1225 19:01:57.867595  276130 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.0
	I1225 19:01:57.875551  276130 binaries.go:51] Found k8s binaries, skipping transfer
	I1225 19:01:57.875611  276130 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1225 19:01:57.883470  276130 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (373 bytes)
	I1225 19:01:57.896378  276130 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1225 19:01:57.908663  276130 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2162 bytes)
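Once kubeadm.yaml.new has been copied to the node as above, it can be sanity-checked offline; recent kubeadm releases ship a validate subcommand, and the defaults for the target version can be printed for comparison (a sketch, not something this run performs):

sudo kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new
kubeadm config print init-defaults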
	I1225 19:01:57.921021  276130 ssh_runner.go:195] Run: grep 192.168.103.2	control-plane.minikube.internal$ /etc/hosts
	I1225 19:01:57.924530  276130 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.103.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1225 19:01:57.934133  276130 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1225 19:01:58.019057  276130 ssh_runner.go:195] Run: sudo systemctl start kubelet
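With the kubelet unit and the 10-kubeadm.conf drop-in written and the service started above, the effective command line can be confirmed straight from systemd (a sketch):

systemctl cat kubelet                        # unit plus drop-ins, as merged by systemd
systemctl show kubelet --property=ExecStart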
	I1225 19:01:58.050346  276130 certs.go:69] Setting up /home/jenkins/minikube-integration/22301-5579/.minikube/profiles/old-k8s-version-163446 for IP: 192.168.103.2
	I1225 19:01:58.050374  276130 certs.go:195] generating shared ca certs ...
	I1225 19:01:58.050396  276130 certs.go:227] acquiring lock for ca certs: {Name:mkc96ab6366f062029d385d20297063671b19bca Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1225 19:01:58.050552  276130 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22301-5579/.minikube/ca.key
	I1225 19:01:58.050620  276130 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22301-5579/.minikube/proxy-client-ca.key
	I1225 19:01:58.050634  276130 certs.go:257] generating profile certs ...
	I1225 19:01:58.050748  276130 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22301-5579/.minikube/profiles/old-k8s-version-163446/client.key
	I1225 19:01:58.050813  276130 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22301-5579/.minikube/profiles/old-k8s-version-163446/apiserver.key.29a1c18a
	I1225 19:01:58.050861  276130 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22301-5579/.minikube/profiles/old-k8s-version-163446/proxy-client.key
	I1225 19:01:58.051057  276130 certs.go:484] found cert: /home/jenkins/minikube-integration/22301-5579/.minikube/certs/9112.pem (1338 bytes)
	W1225 19:01:58.051102  276130 certs.go:480] ignoring /home/jenkins/minikube-integration/22301-5579/.minikube/certs/9112_empty.pem, impossibly tiny 0 bytes
	I1225 19:01:58.051117  276130 certs.go:484] found cert: /home/jenkins/minikube-integration/22301-5579/.minikube/certs/ca-key.pem (1679 bytes)
	I1225 19:01:58.051154  276130 certs.go:484] found cert: /home/jenkins/minikube-integration/22301-5579/.minikube/certs/ca.pem (1078 bytes)
	I1225 19:01:58.051185  276130 certs.go:484] found cert: /home/jenkins/minikube-integration/22301-5579/.minikube/certs/cert.pem (1123 bytes)
	I1225 19:01:58.051226  276130 certs.go:484] found cert: /home/jenkins/minikube-integration/22301-5579/.minikube/certs/key.pem (1679 bytes)
	I1225 19:01:58.051282  276130 certs.go:484] found cert: /home/jenkins/minikube-integration/22301-5579/.minikube/files/etc/ssl/certs/91122.pem (1708 bytes)
	I1225 19:01:58.051832  276130 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22301-5579/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1225 19:01:58.071149  276130 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22301-5579/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1225 19:01:58.091332  276130 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22301-5579/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1225 19:01:58.111078  276130 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22301-5579/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1225 19:01:58.134281  276130 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22301-5579/.minikube/profiles/old-k8s-version-163446/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1225 19:01:58.154076  276130 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22301-5579/.minikube/profiles/old-k8s-version-163446/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1225 19:01:58.170605  276130 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22301-5579/.minikube/profiles/old-k8s-version-163446/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1225 19:01:58.186824  276130 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22301-5579/.minikube/profiles/old-k8s-version-163446/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1225 19:01:58.203459  276130 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22301-5579/.minikube/certs/9112.pem --> /usr/share/ca-certificates/9112.pem (1338 bytes)
	I1225 19:01:58.224250  276130 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22301-5579/.minikube/files/etc/ssl/certs/91122.pem --> /usr/share/ca-certificates/91122.pem (1708 bytes)
	I1225 19:01:58.241941  276130 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22301-5579/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1225 19:01:58.259915  276130 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (722 bytes)
	I1225 19:01:58.272375  276130 ssh_runner.go:195] Run: openssl version
	I1225 19:01:58.278637  276130 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/91122.pem
	I1225 19:01:58.285624  276130 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/91122.pem /etc/ssl/certs/91122.pem
	I1225 19:01:58.292922  276130 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/91122.pem
	I1225 19:01:58.296675  276130 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 25 18:34 /usr/share/ca-certificates/91122.pem
	I1225 19:01:58.296724  276130 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/91122.pem
	I1225 19:01:58.332307  276130 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1225 19:01:58.340040  276130 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1225 19:01:58.347224  276130 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1225 19:01:58.354992  276130 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1225 19:01:58.358755  276130 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 25 18:28 /usr/share/ca-certificates/minikubeCA.pem
	I1225 19:01:58.358809  276130 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1225 19:01:58.396228  276130 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1225 19:01:58.404034  276130 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/9112.pem
	I1225 19:01:58.411371  276130 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/9112.pem /etc/ssl/certs/9112.pem
	I1225 19:01:58.418644  276130 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/9112.pem
	I1225 19:01:58.422208  276130 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 25 18:34 /usr/share/ca-certificates/9112.pem
	I1225 19:01:58.422256  276130 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/9112.pem
	I1225 19:01:58.456987  276130 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
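The <hash>.0 symlink checks above follow OpenSSL's c_rehash naming convention: the link name is the subject hash of the CA certificate. The two can be cross-checked directly (a sketch; for minikubeCA the printed hash corresponds to the b5213941.0 link tested earlier):

openssl x509 -noout -subject_hash -in /usr/share/ca-certificates/minikubeCA.pem
ls -l /etc/ssl/certs/$(openssl x509 -noout -subject_hash -in /usr/share/ca-certificates/minikubeCA.pem).0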
	I1225 19:01:58.464505  276130 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1225 19:01:58.468129  276130 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1225 19:01:58.502526  276130 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1225 19:01:58.537269  276130 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1225 19:01:58.578806  276130 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1225 19:01:58.625301  276130 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1225 19:01:58.676472  276130 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
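The -checkend 86400 probes above succeed only when a certificate stays valid for at least the given number of seconds (86400 s = 24 h); the result is carried in the exit status rather than the output (a sketch against one of the certs copied earlier):

if openssl x509 -noout -in /var/lib/minikube/certs/apiserver.crt -checkend 86400; then
  echo "apiserver cert valid for at least 24h"
else
  echo "apiserver cert expires within 24h (or is already expired)"
fi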
	I1225 19:01:58.739386  276130 kubeadm.go:401] StartCluster: {Name:old-k8s-version-163446 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-163446 Namespace:default APIServerHAVIP: APIServerName:minikube
CA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID
:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1225 19:01:58.739503  276130 cri.go:61] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1225 19:01:58.739557  276130 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1225 19:01:58.771437  276130 cri.go:96] found id: "b4b49a940b58f765b0e9b7ce25aea04517e3af0b3e9f3d8cb36a460d92e868f4"
	I1225 19:01:58.771460  276130 cri.go:96] found id: "739051af3caddbf4be898cc7e7f82a012b1edd3b32b01e120d48d8420bf77f67"
	I1225 19:01:58.771466  276130 cri.go:96] found id: "c1c1926bfed12740e7d65b2cd81a01a86dd6a1887ce4e9b9fc5fd2fa5d9e0552"
	I1225 19:01:58.771471  276130 cri.go:96] found id: "b66569b95e263d0c33bf3838b444600f919279c26935aa24c1bd52a5a645a4dd"
	I1225 19:01:58.771483  276130 cri.go:96] found id: ""
	I1225 19:01:58.771533  276130 ssh_runner.go:195] Run: sudo runc list -f json
	W1225 19:01:58.784098  276130 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-25T19:01:58Z" level=error msg="open /run/runc: no such file or directory"
	I1225 19:01:58.784176  276130 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1225 19:01:58.792694  276130 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1225 19:01:58.792714  276130 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1225 19:01:58.792763  276130 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1225 19:01:58.800666  276130 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1225 19:01:58.801909  276130 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-163446" does not appear in /home/jenkins/minikube-integration/22301-5579/kubeconfig
	I1225 19:01:58.802801  276130 kubeconfig.go:62] /home/jenkins/minikube-integration/22301-5579/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-163446" cluster setting kubeconfig missing "old-k8s-version-163446" context setting]
	I1225 19:01:58.804088  276130 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22301-5579/kubeconfig: {Name:mk959de02482281f87c2171d9b2421941fad1e06 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1225 19:01:58.806400  276130 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1225 19:01:58.816209  276130 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.103.2
	I1225 19:01:58.816242  276130 kubeadm.go:602] duration metric: took 23.522265ms to restartPrimaryControlPlane
	I1225 19:01:58.816262  276130 kubeadm.go:403] duration metric: took 76.879587ms to StartCluster
	I1225 19:01:58.816280  276130 settings.go:142] acquiring lock: {Name:mk8db67a95daebdad9164c803819dcb179c3006a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1225 19:01:58.816350  276130 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22301-5579/kubeconfig
	I1225 19:01:58.818733  276130 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22301-5579/kubeconfig: {Name:mk959de02482281f87c2171d9b2421941fad1e06 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1225 19:01:58.819066  276130 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1225 19:01:58.819102  276130 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1225 19:01:58.819205  276130 addons.go:70] Setting storage-provisioner=true in profile "old-k8s-version-163446"
	I1225 19:01:58.819226  276130 addons.go:70] Setting dashboard=true in profile "old-k8s-version-163446"
	I1225 19:01:58.819248  276130 addons.go:239] Setting addon dashboard=true in "old-k8s-version-163446"
	I1225 19:01:58.819244  276130 addons.go:70] Setting default-storageclass=true in profile "old-k8s-version-163446"
	W1225 19:01:58.819255  276130 addons.go:248] addon dashboard should already be in state true
	I1225 19:01:58.819258  276130 config.go:182] Loaded profile config "old-k8s-version-163446": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1225 19:01:58.819281  276130 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "old-k8s-version-163446"
	I1225 19:01:58.819283  276130 host.go:66] Checking if "old-k8s-version-163446" exists ...
	I1225 19:01:58.819231  276130 addons.go:239] Setting addon storage-provisioner=true in "old-k8s-version-163446"
	W1225 19:01:58.819385  276130 addons.go:248] addon storage-provisioner should already be in state true
	I1225 19:01:58.819408  276130 host.go:66] Checking if "old-k8s-version-163446" exists ...
	I1225 19:01:58.819654  276130 cli_runner.go:164] Run: docker container inspect old-k8s-version-163446 --format={{.State.Status}}
	I1225 19:01:58.819804  276130 cli_runner.go:164] Run: docker container inspect old-k8s-version-163446 --format={{.State.Status}}
	I1225 19:01:58.819817  276130 cli_runner.go:164] Run: docker container inspect old-k8s-version-163446 --format={{.State.Status}}
	I1225 19:01:58.822399  276130 out.go:179] * Verifying Kubernetes components...
	I1225 19:01:58.823700  276130 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1225 19:01:58.847385  276130 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1225 19:01:58.847395  276130 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1225 19:01:58.848689  276130 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1225 19:01:58.848711  276130 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1225 19:01:58.848768  276130 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-163446
	I1225 19:01:58.849449  276130 addons.go:239] Setting addon default-storageclass=true in "old-k8s-version-163446"
	W1225 19:01:58.849471  276130 addons.go:248] addon default-storageclass should already be in state true
	I1225 19:01:58.849501  276130 host.go:66] Checking if "old-k8s-version-163446" exists ...
	I1225 19:01:58.850031  276130 cli_runner.go:164] Run: docker container inspect old-k8s-version-163446 --format={{.State.Status}}
	I1225 19:01:58.853352  276130 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	W1225 19:01:55.024488  270844 node_ready.go:57] node "embed-certs-684693" has "Ready":"False" status (will retry)
	W1225 19:01:57.024614  270844 node_ready.go:57] node "embed-certs-684693" has "Ready":"False" status (will retry)
	W1225 19:01:59.025037  270844 node_ready.go:57] node "embed-certs-684693" has "Ready":"False" status (will retry)
	I1225 19:01:58.854434  276130 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1225 19:01:58.854451  276130 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1225 19:01:58.854503  276130 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-163446
	I1225 19:01:58.875399  276130 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33073 SSHKeyPath:/home/jenkins/minikube-integration/22301-5579/.minikube/machines/old-k8s-version-163446/id_rsa Username:docker}
	I1225 19:01:58.887620  276130 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1225 19:01:58.887645  276130 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1225 19:01:58.887701  276130 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-163446
	I1225 19:01:58.895891  276130 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33073 SSHKeyPath:/home/jenkins/minikube-integration/22301-5579/.minikube/machines/old-k8s-version-163446/id_rsa Username:docker}
	I1225 19:01:58.916727  276130 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33073 SSHKeyPath:/home/jenkins/minikube-integration/22301-5579/.minikube/machines/old-k8s-version-163446/id_rsa Username:docker}
	I1225 19:01:58.974466  276130 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1225 19:01:58.987777  276130 node_ready.go:35] waiting up to 6m0s for node "old-k8s-version-163446" to be "Ready" ...
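A rough manual equivalent of this node readiness wait, using the kubectl context minikube writes for the profile (a sketch; the 6m timeout mirrors the log):

kubectl --context old-k8s-version-163446 wait node/old-k8s-version-163446 \
  --for=condition=Ready --timeout=6m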
	I1225 19:01:58.988649  276130 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1225 19:01:59.002525  276130 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1225 19:01:59.002544  276130 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1225 19:01:59.020540  276130 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1225 19:01:59.020566  276130 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1225 19:01:59.031044  276130 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1225 19:01:59.037260  276130 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1225 19:01:59.037287  276130 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1225 19:01:59.060073  276130 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1225 19:01:59.060103  276130 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1225 19:01:59.077038  276130 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1225 19:01:59.077067  276130 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1225 19:01:59.092404  276130 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1225 19:01:59.092431  276130 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1225 19:01:59.107083  276130 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1225 19:01:59.107113  276130 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1225 19:01:59.120954  276130 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1225 19:01:59.120993  276130 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1225 19:01:59.134740  276130 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1225 19:01:59.134763  276130 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1225 19:01:59.148778  276130 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1225 19:02:00.716200  276130 node_ready.go:49] node "old-k8s-version-163446" is "Ready"
	I1225 19:02:00.716233  276130 node_ready.go:38] duration metric: took 1.728421586s for node "old-k8s-version-163446" to be "Ready" ...
	I1225 19:02:00.716250  276130 api_server.go:52] waiting for apiserver process to appear ...
	I1225 19:02:00.716315  276130 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1225 19:02:01.350669  276130 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.361982667s)
	I1225 19:02:01.350737  276130 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.31959987s)
	I1225 19:02:01.689309  276130 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (2.540487412s)
	I1225 19:02:01.689376  276130 api_server.go:72] duration metric: took 2.870275259s to wait for apiserver process to appear ...
	I1225 19:02:01.689402  276130 api_server.go:88] waiting for apiserver healthz status ...
	I1225 19:02:01.689428  276130 api_server.go:299] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
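The healthz probe above can usually be reproduced anonymously, since kube-apiserver's default RBAC (system:public-info-viewer) exposes /healthz, /livez and /readyz to unauthenticated clients (a sketch; -k skips verification of the minikube-generated serving cert):

curl -k 'https://192.168.103.2:8443/healthz' ; echo
curl -k 'https://192.168.103.2:8443/readyz?verbose' ; echo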
	I1225 19:02:01.691403  276130 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p old-k8s-version-163446 addons enable metrics-server
	
	I1225 19:02:01.692715  276130 out.go:179] * Enabled addons: storage-provisioner, default-storageclass, dashboard
	I1225 19:01:57.983775  260034 api_server.go:315] stopped: https://192.168.94.2:8443/healthz: Get "https://192.168.94.2:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1225 19:01:57.983834  260034 api_server.go:299] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	
	
	==> CRI-O <==
	Dec 25 19:01:52 no-preload-148352 crio[764]: time="2025-12-25T19:01:52.330100571Z" level=info msg="Starting container: ebbd825a3a0a5529d3cd17258ef17e36f12bae3797e0433bff30e5f3935d03e1" id=252b7e48-751d-4111-9ff8-81261b0265d5 name=/runtime.v1.RuntimeService/StartContainer
	Dec 25 19:01:52 no-preload-148352 crio[764]: time="2025-12-25T19:01:52.333369165Z" level=info msg="Started container" PID=2779 containerID=ebbd825a3a0a5529d3cd17258ef17e36f12bae3797e0433bff30e5f3935d03e1 description=kube-system/coredns-7d764666f9-lqvms/coredns id=252b7e48-751d-4111-9ff8-81261b0265d5 name=/runtime.v1.RuntimeService/StartContainer sandboxID=c0378e2cda3eca7cc61e3588b9be243bb5f679cd6b6bc0ae53820cba9619aef8
	Dec 25 19:01:55 no-preload-148352 crio[764]: time="2025-12-25T19:01:55.662716851Z" level=info msg="Running pod sandbox: default/busybox/POD" id=052d507b-2444-4966-acb4-bf133dd5990c name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 25 19:01:55 no-preload-148352 crio[764]: time="2025-12-25T19:01:55.6627996Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 25 19:01:55 no-preload-148352 crio[764]: time="2025-12-25T19:01:55.668123089Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:3dc20aca1566bf03660159b63e6155b5b6aedb72092c4b1e085fd39f950673e3 UID:cdb08b45-a83a-46fd-8df3-e2adf0b2917e NetNS:/var/run/netns/4fb328dc-9336-406b-82aa-20dd4f2e85d5 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc0005106c0}] Aliases:map[]}"
	Dec 25 19:01:55 no-preload-148352 crio[764]: time="2025-12-25T19:01:55.66816893Z" level=info msg="Adding pod default_busybox to CNI network \"kindnet\" (type=ptp)"
	Dec 25 19:01:55 no-preload-148352 crio[764]: time="2025-12-25T19:01:55.677852093Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:3dc20aca1566bf03660159b63e6155b5b6aedb72092c4b1e085fd39f950673e3 UID:cdb08b45-a83a-46fd-8df3-e2adf0b2917e NetNS:/var/run/netns/4fb328dc-9336-406b-82aa-20dd4f2e85d5 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc0005106c0}] Aliases:map[]}"
	Dec 25 19:01:55 no-preload-148352 crio[764]: time="2025-12-25T19:01:55.678020225Z" level=info msg="Checking pod default_busybox for CNI network kindnet (type=ptp)"
	Dec 25 19:01:55 no-preload-148352 crio[764]: time="2025-12-25T19:01:55.67879321Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Dec 25 19:01:55 no-preload-148352 crio[764]: time="2025-12-25T19:01:55.680010306Z" level=info msg="Ran pod sandbox 3dc20aca1566bf03660159b63e6155b5b6aedb72092c4b1e085fd39f950673e3 with infra container: default/busybox/POD" id=052d507b-2444-4966-acb4-bf133dd5990c name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 25 19:01:55 no-preload-148352 crio[764]: time="2025-12-25T19:01:55.681324406Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=e2d2cd21-7739-474b-92f1-258c357e31ab name=/runtime.v1.ImageService/ImageStatus
	Dec 25 19:01:55 no-preload-148352 crio[764]: time="2025-12-25T19:01:55.681454767Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=e2d2cd21-7739-474b-92f1-258c357e31ab name=/runtime.v1.ImageService/ImageStatus
	Dec 25 19:01:55 no-preload-148352 crio[764]: time="2025-12-25T19:01:55.68149335Z" level=info msg="Neither image nor artifact gcr.io/k8s-minikube/busybox:1.28.4-glibc found" id=e2d2cd21-7739-474b-92f1-258c357e31ab name=/runtime.v1.ImageService/ImageStatus
	Dec 25 19:01:55 no-preload-148352 crio[764]: time="2025-12-25T19:01:55.682307109Z" level=info msg="Pulling image: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=85770491-2e5f-4586-9def-c14e1f268312 name=/runtime.v1.ImageService/PullImage
	Dec 25 19:01:55 no-preload-148352 crio[764]: time="2025-12-25T19:01:55.683703334Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Dec 25 19:01:56 no-preload-148352 crio[764]: time="2025-12-25T19:01:56.972136652Z" level=info msg="Pulled image: gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998" id=85770491-2e5f-4586-9def-c14e1f268312 name=/runtime.v1.ImageService/PullImage
	Dec 25 19:01:56 no-preload-148352 crio[764]: time="2025-12-25T19:01:56.972735162Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=e9aea501-512f-43a9-9200-cc7d21a148fb name=/runtime.v1.ImageService/ImageStatus
	Dec 25 19:01:56 no-preload-148352 crio[764]: time="2025-12-25T19:01:56.974646182Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=66b01bf6-ebbf-4620-aba6-299b1a938567 name=/runtime.v1.ImageService/ImageStatus
	Dec 25 19:01:56 no-preload-148352 crio[764]: time="2025-12-25T19:01:56.978200848Z" level=info msg="Creating container: default/busybox/busybox" id=fb50821e-2e9d-4bbe-a568-9036706db366 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 25 19:01:56 no-preload-148352 crio[764]: time="2025-12-25T19:01:56.978312438Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 25 19:01:56 no-preload-148352 crio[764]: time="2025-12-25T19:01:56.982131069Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 25 19:01:56 no-preload-148352 crio[764]: time="2025-12-25T19:01:56.982528197Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 25 19:01:57 no-preload-148352 crio[764]: time="2025-12-25T19:01:57.014294745Z" level=info msg="Created container d09ea8b87c65750f1e29a18ae8ae70ba2f796daf2a7fa2945b08a54c6d76f379: default/busybox/busybox" id=fb50821e-2e9d-4bbe-a568-9036706db366 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 25 19:01:57 no-preload-148352 crio[764]: time="2025-12-25T19:01:57.014815107Z" level=info msg="Starting container: d09ea8b87c65750f1e29a18ae8ae70ba2f796daf2a7fa2945b08a54c6d76f379" id=909add1b-3aa1-4260-8216-4926847397cf name=/runtime.v1.RuntimeService/StartContainer
	Dec 25 19:01:57 no-preload-148352 crio[764]: time="2025-12-25T19:01:57.016406905Z" level=info msg="Started container" PID=2855 containerID=d09ea8b87c65750f1e29a18ae8ae70ba2f796daf2a7fa2945b08a54c6d76f379 description=default/busybox/busybox id=909add1b-3aa1-4260-8216-4926847397cf name=/runtime.v1.RuntimeService/StartContainer sandboxID=3dc20aca1566bf03660159b63e6155b5b6aedb72092c4b1e085fd39f950673e3
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	d09ea8b87c657       gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998   7 seconds ago       Running             busybox                   0                   3dc20aca1566b       busybox                                     default
	ebbd825a3a0a5       aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139                                      12 seconds ago      Running             coredns                   0                   c0378e2cda3ec       coredns-7d764666f9-lqvms                    kube-system
	181ff55138bb7       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      12 seconds ago      Running             storage-provisioner       0                   672a610aaaad5       storage-provisioner                         kube-system
	d19f757fdb1da       docker.io/kindest/kindnetd@sha256:7c22558dc06a570d46ea6e8a73b23cdc754eb81f7c08d3441a3171ad359ffc27    23 seconds ago      Running             kindnet-cni               0                   ecdd807d57f79       kindnet-jx25d                               kube-system
	c40fc52de7026       af0321f3a4f388cfb978464739c323ebf891a7b0b50cdfd7179e92f141dad42a                                      24 seconds ago      Running             kube-proxy                0                   c922528dc4ea1       kube-proxy-j2p4x                            kube-system
	9dfec1d9d8418       73f80cdc073daa4d501207f9e6dec1fa9eea5f27e8d347b8a0c4bad8811eecdc                                      34 seconds ago      Running             kube-scheduler            0                   1a0ffa370c34b       kube-scheduler-no-preload-148352            kube-system
	0ca2ec00daa8f       58865405a13bccac1d74bc3f446dddd22e6ef0d7ee8b52363c86dd31838976ce                                      34 seconds ago      Running             kube-apiserver            0                   5da1fff2f7240       kube-apiserver-no-preload-148352            kube-system
	7457b6ec85e52       0a108f7189562e99793bdecab61fdf1a7c9d913af3385de9da17fb9d6ff430e2                                      34 seconds ago      Running             etcd                      0                   d96e0b3dc9867       etcd-no-preload-148352                      kube-system
	94ccad8b641bb       5032a56602e1b9bd8856699701b6148aa1b9901d05b61f893df3b57f84aca614                                      34 seconds ago      Running             kube-controller-manager   0                   006cd65ebe018       kube-controller-manager-no-preload-148352   kube-system
	
	
	==> coredns [ebbd825a3a0a5529d3cd17258ef17e36f12bae3797e0433bff30e5f3935d03e1] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.13.1
	linux/amd64, go1.25.2, 1db4568
	[INFO] 127.0.0.1:60772 - 5 "HINFO IN 7588009905069427332.6692951677825517230. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.030694607s
	
	
	==> describe nodes <==
	Name:               no-preload-148352
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=no-preload-148352
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=65b0339f3ab6fa9cf527eb915d9288ef7a9c7fef
	                    minikube.k8s.io/name=no-preload-148352
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_25T19_01_34_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 25 Dec 2025 19:01:31 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-148352
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 25 Dec 2025 19:02:04 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 25 Dec 2025 19:02:04 +0000   Thu, 25 Dec 2025 19:01:30 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 25 Dec 2025 19:02:04 +0000   Thu, 25 Dec 2025 19:01:30 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 25 Dec 2025 19:02:04 +0000   Thu, 25 Dec 2025 19:01:30 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 25 Dec 2025 19:02:04 +0000   Thu, 25 Dec 2025 19:01:51 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    no-preload-148352
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863360Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863360Ki
	  pods:               110
	System Info:
	  Machine ID:                 6a05ebbd9dbde47388fc365d694bbbfd
	  System UUID:                de63609a-6f51-4a32-ad70-d0138650b5f8
	  Boot ID:                    665c5054-bd76-444c-ba4d-23c4edde1464
	  Kernel Version:             6.8.0-1045-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.3
	  Kubelet Version:            v1.35.0-rc.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         9s
	  kube-system                 coredns-7d764666f9-lqvms                     100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     25s
	  kube-system                 etcd-no-preload-148352                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         31s
	  kube-system                 kindnet-jx25d                                100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      25s
	  kube-system                 kube-apiserver-no-preload-148352             250m (3%)     0 (0%)      0 (0%)           0 (0%)         31s
	  kube-system                 kube-controller-manager-no-preload-148352    200m (2%)     0 (0%)      0 (0%)           0 (0%)         30s
	  kube-system                 kube-proxy-j2p4x                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         25s
	  kube-system                 kube-scheduler-no-preload-148352             100m (1%)     0 (0%)      0 (0%)           0 (0%)         30s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         24s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason          Age   From             Message
	  ----    ------          ----  ----             -------
	  Normal  RegisteredNode  26s   node-controller  Node no-preload-148352 event: Registered Node no-preload-148352 in Controller
	
	
	==> dmesg <==
	[Dec25 18:17] MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
	[  +0.001703] TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details.
	[  +0.001000] MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
	[  +0.084012] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
	[  +0.391152] i8042: Warning: Keylock active
	[  +0.010665] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.485479] block sda: the capability attribute has been deprecated.
	[  +0.079658] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.024208] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +4.790329] kauditd_printk_skb: 47 callbacks suppressed
	
	
	==> etcd [7457b6ec85e52e3b186eed146de13bae650e4d089d82f965024253b497d90361] <==
	{"level":"info","ts":"2025-12-25T19:01:30.269700Z","caller":"membership/cluster.go:424","msg":"added member","cluster-id":"68eaea490fab4e05","local-member-id":"9f0758e1c58a86ed","added-peer-id":"9f0758e1c58a86ed","added-peer-peer-urls":["https://192.168.85.2:2380"],"added-peer-is-learner":false}
	{"level":"info","ts":"2025-12-25T19:01:30.560517Z","logger":"raft","caller":"v3@v3.6.0/raft.go:988","msg":"9f0758e1c58a86ed is starting a new election at term 1"}
	{"level":"info","ts":"2025-12-25T19:01:30.560592Z","logger":"raft","caller":"v3@v3.6.0/raft.go:930","msg":"9f0758e1c58a86ed became pre-candidate at term 1"}
	{"level":"info","ts":"2025-12-25T19:01:30.560649Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1077","msg":"9f0758e1c58a86ed received MsgPreVoteResp from 9f0758e1c58a86ed at term 1"}
	{"level":"info","ts":"2025-12-25T19:01:30.560661Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1693","msg":"9f0758e1c58a86ed has received 1 MsgPreVoteResp votes and 0 vote rejections"}
	{"level":"info","ts":"2025-12-25T19:01:30.560675Z","logger":"raft","caller":"v3@v3.6.0/raft.go:912","msg":"9f0758e1c58a86ed became candidate at term 2"}
	{"level":"info","ts":"2025-12-25T19:01:30.561166Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1077","msg":"9f0758e1c58a86ed received MsgVoteResp from 9f0758e1c58a86ed at term 2"}
	{"level":"info","ts":"2025-12-25T19:01:30.561208Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1693","msg":"9f0758e1c58a86ed has received 1 MsgVoteResp votes and 0 vote rejections"}
	{"level":"info","ts":"2025-12-25T19:01:30.561228Z","logger":"raft","caller":"v3@v3.6.0/raft.go:970","msg":"9f0758e1c58a86ed became leader at term 2"}
	{"level":"info","ts":"2025-12-25T19:01:30.561242Z","logger":"raft","caller":"v3@v3.6.0/node.go:370","msg":"raft.node: 9f0758e1c58a86ed elected leader 9f0758e1c58a86ed at term 2"}
	{"level":"info","ts":"2025-12-25T19:01:30.561925Z","caller":"etcdserver/server.go:1820","msg":"published local member to cluster through raft","local-member-id":"9f0758e1c58a86ed","local-member-attributes":"{Name:no-preload-148352 ClientURLs:[https://192.168.85.2:2379]}","cluster-id":"68eaea490fab4e05","publish-timeout":"7s"}
	{"level":"info","ts":"2025-12-25T19:01:30.561938Z","caller":"embed/serve.go:138","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-12-25T19:01:30.561969Z","caller":"embed/serve.go:138","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-12-25T19:01:30.562068Z","caller":"etcdserver/server.go:2420","msg":"setting up initial cluster version using v3 API","cluster-version":"3.6"}
	{"level":"info","ts":"2025-12-25T19:01:30.562213Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-12-25T19:01:30.562258Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2025-12-25T19:01:30.562609Z","caller":"membership/cluster.go:682","msg":"set initial cluster version","cluster-id":"68eaea490fab4e05","local-member-id":"9f0758e1c58a86ed","cluster-version":"3.6"}
	{"level":"info","ts":"2025-12-25T19:01:30.563339Z","caller":"api/capability.go:76","msg":"enabled capabilities for version","cluster-version":"3.6"}
	{"level":"info","ts":"2025-12-25T19:01:30.563405Z","caller":"etcdserver/server.go:2440","msg":"cluster version is updated","cluster-version":"3.6"}
	{"level":"info","ts":"2025-12-25T19:01:30.563433Z","caller":"version/monitor.go:116","msg":"cluster version differs from storage version.","cluster-version":"3.6.0","storage-version":"3.5.0"}
	{"level":"info","ts":"2025-12-25T19:01:30.563520Z","caller":"v3rpc/health.go:63","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-12-25T19:01:30.563584Z","caller":"schema/migration.go:65","msg":"updated storage version","new-storage-version":"3.6.0"}
	{"level":"info","ts":"2025-12-25T19:01:30.564037Z","caller":"v3rpc/health.go:63","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-12-25T19:01:30.566763Z","caller":"embed/serve.go:283","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2025-12-25T19:01:30.567081Z","caller":"embed/serve.go:283","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.85.2:2379"}
	
	
	==> kernel <==
	 19:02:04 up 44 min,  0 user,  load average: 3.12, 2.50, 1.75
	Linux no-preload-148352 6.8.0-1045-gcp #48~22.04.1-Ubuntu SMP Tue Nov 25 13:07:56 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [d19f757fdb1da66a660b4d498bdfe7bb844a61747bbe7639f2f6da7d4929eaf8] <==
	I1225 19:01:41.433439       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1225 19:01:41.474370       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1225 19:01:41.474553       1 main.go:148] setting mtu 1500 for CNI 
	I1225 19:01:41.474578       1 main.go:178] kindnetd IP family: "ipv4"
	I1225 19:01:41.474613       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-12-25T19:01:41Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1225 19:01:41.677051       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1225 19:01:41.677079       1 controller.go:381] "Waiting for informer caches to sync"
	I1225 19:01:41.773188       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1225 19:01:41.773384       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1225 19:01:42.173918       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1225 19:01:42.173957       1 metrics.go:72] Registering metrics
	I1225 19:01:42.174033       1 controller.go:711] "Syncing nftables rules"
	I1225 19:01:51.676032       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1225 19:01:51.676095       1 main.go:301] handling current node
	I1225 19:02:01.676117       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1225 19:02:01.676156       1 main.go:301] handling current node
	
	
	==> kube-apiserver [0ca2ec00daa8fba7e9c001b21316e5230d9ef2ce8332427d11f8a2feef6e19a3] <==
	I1225 19:01:31.719227       1 policy_source.go:248] refreshing policies
	E1225 19:01:31.725952       1 controller.go:156] "Error while syncing ConfigMap" err="namespaces \"kube-system\" not found" logger="UnhandledError" configmap="kube-system/kube-apiserver-legacy-service-account-token-tracking"
	I1225 19:01:31.774016       1 controller.go:667] quota admission added evaluator for: namespaces
	I1225 19:01:31.778376       1 default_servicecidr_controller.go:231] Setting default ServiceCIDR condition Ready to True
	I1225 19:01:31.778448       1 cidrallocator.go:302] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1225 19:01:31.785636       1 cidrallocator.go:278] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1225 19:01:31.879250       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1225 19:01:32.576103       1 storage_scheduling.go:123] created PriorityClass system-node-critical with value 2000001000
	I1225 19:01:32.581387       1 storage_scheduling.go:123] created PriorityClass system-cluster-critical with value 2000000000
	I1225 19:01:32.581406       1 storage_scheduling.go:139] all system priority classes are created successfully or already exist.
	I1225 19:01:33.109949       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1225 19:01:33.154063       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1225 19:01:33.284813       1 alloc.go:329] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1225 19:01:33.291535       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.85.2]
	I1225 19:01:33.292644       1 controller.go:667] quota admission added evaluator for: endpoints
	I1225 19:01:33.297569       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1225 19:01:33.606744       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1225 19:01:34.034500       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1225 19:01:34.043414       1 alloc.go:329] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1225 19:01:34.051655       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1225 19:01:39.059485       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1225 19:01:39.160677       1 cidrallocator.go:278] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1225 19:01:39.164645       1 cidrallocator.go:278] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1225 19:01:39.358550       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	E1225 19:02:03.440981       1 conn.go:339] Error on socket receive: read tcp 192.168.85.2:8443->192.168.85.1:51416: use of closed network connection
	
	
	==> kube-controller-manager [94ccad8b641bb13d0c2c2f9fdb5c9534f5b91b1be7f6286fe8c36fc29975f83b] <==
	I1225 19:01:38.422662       1 shared_informer.go:377] "Caches are synced"
	I1225 19:01:38.422714       1 node_lifecycle_controller.go:886] "Missing timestamp for Node. Assuming now as a timestamp" node="no-preload-148352"
	I1225 19:01:38.422735       1 shared_informer.go:377] "Caches are synced"
	I1225 19:01:38.422763       1 shared_informer.go:377] "Caches are synced"
	I1225 19:01:38.422803       1 shared_informer.go:377] "Caches are synced"
	I1225 19:01:38.422806       1 node_lifecycle_controller.go:1038] "Controller detected that all Nodes are not-Ready. Entering master disruption mode"
	I1225 19:01:38.422563       1 shared_informer.go:377] "Caches are synced"
	I1225 19:01:38.422725       1 shared_informer.go:377] "Caches are synced"
	I1225 19:01:38.423013       1 shared_informer.go:377] "Caches are synced"
	I1225 19:01:38.423028       1 shared_informer.go:377] "Caches are synced"
	I1225 19:01:38.423080       1 shared_informer.go:377] "Caches are synced"
	I1225 19:01:38.423230       1 shared_informer.go:377] "Caches are synced"
	I1225 19:01:38.423718       1 shared_informer.go:377] "Caches are synced"
	I1225 19:01:38.423719       1 shared_informer.go:377] "Caches are synced"
	I1225 19:01:38.423719       1 shared_informer.go:377] "Caches are synced"
	I1225 19:01:38.423720       1 shared_informer.go:377] "Caches are synced"
	I1225 19:01:38.423728       1 shared_informer.go:377] "Caches are synced"
	I1225 19:01:38.423731       1 shared_informer.go:377] "Caches are synced"
	I1225 19:01:38.428946       1 range_allocator.go:433] "Set node PodCIDR" node="no-preload-148352" podCIDRs=["10.244.0.0/24"]
	I1225 19:01:38.429976       1 shared_informer.go:377] "Caches are synced"
	I1225 19:01:38.516467       1 shared_informer.go:377] "Caches are synced"
	I1225 19:01:38.522625       1 shared_informer.go:377] "Caches are synced"
	I1225 19:01:38.522639       1 garbagecollector.go:166] "Garbage collector: all resource monitors have synced"
	I1225 19:01:38.522646       1 garbagecollector.go:169] "Proceeding to collect garbage"
	I1225 19:01:53.424758       1 node_lifecycle_controller.go:1057] "Controller detected that some Nodes are Ready. Exiting master disruption mode"
	
	
	==> kube-proxy [c40fc52de702640feb5c1c0e261536320a73d785f90059c4ab2f259199b7850e] <==
	I1225 19:01:39.868716       1 server_linux.go:53] "Using iptables proxy"
	I1225 19:01:39.957622       1 shared_informer.go:370] "Waiting for caches to sync"
	I1225 19:01:40.058407       1 shared_informer.go:377] "Caches are synced"
	I1225 19:01:40.058450       1 server.go:218] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1225 19:01:40.058563       1 server.go:255] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1225 19:01:40.086069       1 server.go:264] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1225 19:01:40.086154       1 server_linux.go:136] "Using iptables Proxier"
	I1225 19:01:40.094592       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1225 19:01:40.095052       1 server.go:529] "Version info" version="v1.35.0-rc.1"
	I1225 19:01:40.095086       1 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1225 19:01:40.097924       1 config.go:200] "Starting service config controller"
	I1225 19:01:40.098023       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1225 19:01:40.098190       1 config.go:106] "Starting endpoint slice config controller"
	I1225 19:01:40.098229       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1225 19:01:40.099548       1 config.go:403] "Starting serviceCIDR config controller"
	I1225 19:01:40.099628       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1225 19:01:40.099639       1 config.go:309] "Starting node config controller"
	I1225 19:01:40.100471       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1225 19:01:40.100934       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1225 19:01:40.199004       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1225 19:01:40.199093       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1225 19:01:40.200519       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-scheduler [9dfec1d9d8418255b9a087a1906b9826cc655d290abe22af00fd47c857091a53] <==
	E1225 19:01:31.629028       1 reflector.go:204] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.StorageClass"
	E1225 19:01:31.629042       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Node"
	E1225 19:01:31.629761       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ReplicationController"
	E1225 19:01:31.629792       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ReplicaSet"
	E1225 19:01:31.629847       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Service"
	E1225 19:01:31.629869       1 reflector.go:204] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.PersistentVolumeClaim"
	E1225 19:01:31.629960       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Namespace"
	E1225 19:01:31.629991       1 reflector.go:204] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.PersistentVolume"
	E1225 19:01:31.630041       1 reflector.go:204] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.CSINode"
	E1225 19:01:31.630144       1 reflector.go:204] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.CSIStorageCapacity"
	E1225 19:01:31.630206       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Pod"
	E1225 19:01:31.630207       1 reflector.go:204] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.PodDisruptionBudget"
	E1225 19:01:31.630288       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ResourceClaim"
	E1225 19:01:32.474288       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ReplicaSet"
	E1225 19:01:32.495841       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ResourceClaim"
	E1225 19:01:32.601394       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ReplicationController"
	E1225 19:01:32.681212       1 reflector.go:204] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.CSIStorageCapacity"
	E1225 19:01:32.702961       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ResourceSlice"
	E1225 19:01:32.718138       1 reflector.go:204] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.PersistentVolume"
	E1225 19:01:32.782011       1 reflector.go:204] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.CSIDriver"
	E1225 19:01:32.816331       1 reflector.go:204] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.VolumeAttachment"
	E1225 19:01:32.831885       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Service"
	E1225 19:01:32.920517       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Pod"
	E1225 19:01:33.114756       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1693" type="*v1.ConfigMap"
	I1225 19:01:34.822928       1 shared_informer.go:377] "Caches are synced"
	
	
	==> kubelet <==
	Dec 25 19:01:39 no-preload-148352 kubelet[2175]: I1225 19:01:39.528938    2175 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ae9faca6-3e41-4e10-ae96-b7a397c3be75-xtables-lock\") pod \"kube-proxy-j2p4x\" (UID: \"ae9faca6-3e41-4e10-ae96-b7a397c3be75\") " pod="kube-system/kube-proxy-j2p4x"
	Dec 25 19:01:39 no-preload-148352 kubelet[2175]: I1225 19:01:39.529014    2175 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jwjqc\" (UniqueName: \"kubernetes.io/projected/25f416b3-e74e-4d6e-9b1b-d4ddf07659c4-kube-api-access-jwjqc\") pod \"kindnet-jx25d\" (UID: \"25f416b3-e74e-4d6e-9b1b-d4ddf07659c4\") " pod="kube-system/kindnet-jx25d"
	Dec 25 19:01:39 no-preload-148352 kubelet[2175]: I1225 19:01:39.529054    2175 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/25f416b3-e74e-4d6e-9b1b-d4ddf07659c4-cni-cfg\") pod \"kindnet-jx25d\" (UID: \"25f416b3-e74e-4d6e-9b1b-d4ddf07659c4\") " pod="kube-system/kindnet-jx25d"
	Dec 25 19:01:39 no-preload-148352 kubelet[2175]: E1225 19:01:39.862976    2175 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-controller-manager-no-preload-148352" containerName="kube-controller-manager"
	Dec 25 19:01:41 no-preload-148352 kubelet[2175]: I1225 19:01:41.959704    2175 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/kube-proxy-j2p4x" podStartSLOduration=2.959685796 podStartE2EDuration="2.959685796s" podCreationTimestamp="2025-12-25 19:01:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-25 19:01:39.954090134 +0000 UTC m=+6.156326890" watchObservedRunningTime="2025-12-25 19:01:41.959685796 +0000 UTC m=+8.161922464"
	Dec 25 19:01:42 no-preload-148352 kubelet[2175]: E1225 19:01:42.862727    2175 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/etcd-no-preload-148352" containerName="etcd"
	Dec 25 19:01:42 no-preload-148352 kubelet[2175]: I1225 19:01:42.872732    2175 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/kindnet-jx25d" podStartSLOduration=2.404174353 podStartE2EDuration="3.872718082s" podCreationTimestamp="2025-12-25 19:01:39 +0000 UTC" firstStartedPulling="2025-12-25 19:01:39.697819893 +0000 UTC m=+5.900056534" lastFinishedPulling="2025-12-25 19:01:41.166363628 +0000 UTC m=+7.368600263" observedRunningTime="2025-12-25 19:01:41.960470339 +0000 UTC m=+8.162706983" watchObservedRunningTime="2025-12-25 19:01:42.872718082 +0000 UTC m=+9.074954725"
	Dec 25 19:01:43 no-preload-148352 kubelet[2175]: E1225 19:01:43.858247    2175 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-no-preload-148352" containerName="kube-scheduler"
	Dec 25 19:01:45 no-preload-148352 kubelet[2175]: E1225 19:01:45.050532    2175 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-no-preload-148352" containerName="kube-apiserver"
	Dec 25 19:01:49 no-preload-148352 kubelet[2175]: E1225 19:01:49.867473    2175 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-controller-manager-no-preload-148352" containerName="kube-controller-manager"
	Dec 25 19:01:51 no-preload-148352 kubelet[2175]: I1225 19:01:51.925127    2175 kubelet_node_status.go:427] "Fast updating node status as it just became ready"
	Dec 25 19:01:52 no-preload-148352 kubelet[2175]: I1225 19:01:52.020450    2175 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/4caa74a1-bb32-45a7-9cc3-d0af791be23e-tmp\") pod \"storage-provisioner\" (UID: \"4caa74a1-bb32-45a7-9cc3-d0af791be23e\") " pod="kube-system/storage-provisioner"
	Dec 25 19:01:52 no-preload-148352 kubelet[2175]: I1225 19:01:52.020508    2175 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rn8xg\" (UniqueName: \"kubernetes.io/projected/4caa74a1-bb32-45a7-9cc3-d0af791be23e-kube-api-access-rn8xg\") pod \"storage-provisioner\" (UID: \"4caa74a1-bb32-45a7-9cc3-d0af791be23e\") " pod="kube-system/storage-provisioner"
	Dec 25 19:01:52 no-preload-148352 kubelet[2175]: I1225 19:01:52.020582    2175 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87fc533e-6490-4d36-a61b-a754a22afd56-config-volume\") pod \"coredns-7d764666f9-lqvms\" (UID: \"87fc533e-6490-4d36-a61b-a754a22afd56\") " pod="kube-system/coredns-7d764666f9-lqvms"
	Dec 25 19:01:52 no-preload-148352 kubelet[2175]: I1225 19:01:52.020671    2175 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bzfzv\" (UniqueName: \"kubernetes.io/projected/87fc533e-6490-4d36-a61b-a754a22afd56-kube-api-access-bzfzv\") pod \"coredns-7d764666f9-lqvms\" (UID: \"87fc533e-6490-4d36-a61b-a754a22afd56\") " pod="kube-system/coredns-7d764666f9-lqvms"
	Dec 25 19:01:52 no-preload-148352 kubelet[2175]: E1225 19:01:52.864370    2175 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/etcd-no-preload-148352" containerName="etcd"
	Dec 25 19:01:52 no-preload-148352 kubelet[2175]: E1225 19:01:52.969860    2175 prober_manager.go:209] "Readiness probe already exists for container" pod="kube-system/coredns-7d764666f9-lqvms" containerName="coredns"
	Dec 25 19:01:52 no-preload-148352 kubelet[2175]: I1225 19:01:52.991008    2175 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/coredns-7d764666f9-lqvms" podStartSLOduration=13.990983166 podStartE2EDuration="13.990983166s" podCreationTimestamp="2025-12-25 19:01:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-25 19:01:52.980391411 +0000 UTC m=+19.182628057" watchObservedRunningTime="2025-12-25 19:01:52.990983166 +0000 UTC m=+19.193219810"
	Dec 25 19:01:53 no-preload-148352 kubelet[2175]: I1225 19:01:53.001068    2175 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=13.001048887 podStartE2EDuration="13.001048887s" podCreationTimestamp="2025-12-25 19:01:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-25 19:01:53.000549068 +0000 UTC m=+19.202785743" watchObservedRunningTime="2025-12-25 19:01:53.001048887 +0000 UTC m=+19.203285531"
	Dec 25 19:01:53 no-preload-148352 kubelet[2175]: E1225 19:01:53.862329    2175 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-no-preload-148352" containerName="kube-scheduler"
	Dec 25 19:01:53 no-preload-148352 kubelet[2175]: E1225 19:01:53.973642    2175 prober_manager.go:209] "Readiness probe already exists for container" pod="kube-system/coredns-7d764666f9-lqvms" containerName="coredns"
	Dec 25 19:01:54 no-preload-148352 kubelet[2175]: E1225 19:01:54.975966    2175 prober_manager.go:209] "Readiness probe already exists for container" pod="kube-system/coredns-7d764666f9-lqvms" containerName="coredns"
	Dec 25 19:01:55 no-preload-148352 kubelet[2175]: I1225 19:01:55.441720    2175 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hjq4r\" (UniqueName: \"kubernetes.io/projected/cdb08b45-a83a-46fd-8df3-e2adf0b2917e-kube-api-access-hjq4r\") pod \"busybox\" (UID: \"cdb08b45-a83a-46fd-8df3-e2adf0b2917e\") " pod="default/busybox"
	Dec 25 19:01:57 no-preload-148352 kubelet[2175]: I1225 19:01:57.992800    2175 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="default/busybox" podStartSLOduration=1.70089863 podStartE2EDuration="2.992782129s" podCreationTimestamp="2025-12-25 19:01:55 +0000 UTC" firstStartedPulling="2025-12-25 19:01:55.681853046 +0000 UTC m=+21.884089669" lastFinishedPulling="2025-12-25 19:01:56.973736529 +0000 UTC m=+23.175973168" observedRunningTime="2025-12-25 19:01:57.992718495 +0000 UTC m=+24.194955139" watchObservedRunningTime="2025-12-25 19:01:57.992782129 +0000 UTC m=+24.195018772"
	Dec 25 19:02:03 no-preload-148352 kubelet[2175]: E1225 19:02:03.440803    2175 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 127.0.0.1:60800->127.0.0.1:39737: write tcp 127.0.0.1:60800->127.0.0.1:39737: write: broken pipe
	
	
	==> storage-provisioner [181ff55138bb705c49f397e5f541ba4461fcd56d446230da45f7574da1956349] <==
	I1225 19:01:52.343930       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1225 19:01:52.353044       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1225 19:01:52.353132       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1225 19:01:52.356078       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1225 19:01:52.363824       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1225 19:01:52.364083       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1225 19:01:52.364295       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-148352_c7981c91-d619-4a97-a487-27f966392fc7!
	I1225 19:01:52.364597       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"4f3e1ed8-81d0-4039-80b9-a2f1ed9a1f41", APIVersion:"v1", ResourceVersion:"418", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-148352_c7981c91-d619-4a97-a487-27f966392fc7 became leader
	W1225 19:01:52.367061       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1225 19:01:52.371138       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1225 19:01:52.464493       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-148352_c7981c91-d619-4a97-a487-27f966392fc7!
	W1225 19:01:54.374030       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1225 19:01:54.378185       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1225 19:01:56.381988       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1225 19:01:56.386621       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1225 19:01:58.390310       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1225 19:01:58.395612       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1225 19:02:00.398372       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1225 19:02:00.402245       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1225 19:02:02.405198       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1225 19:02:02.409140       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1225 19:02:04.412625       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1225 19:02:04.416487       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-148352 -n no-preload-148352
helpers_test.go:270: (dbg) Run:  kubectl --context no-preload-148352 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:294: <<< TestStartStop/group/no-preload/serial/EnableAddonWhileActive FAILED: end of post-mortem logs <<<
helpers_test.go:295: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (2.02s)

x
+
TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (2.39s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-684693 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-684693 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 11 (236.122642ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-25T19:02:13Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-684693 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 11
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context embed-certs-684693 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:213: (dbg) Non-zero exit: kubectl --context embed-certs-684693 describe deploy/metrics-server -n kube-system: exit status 1 (66.429004ms)

** stderr ** 
	Error from server (NotFound): deployments.apps "metrics-server" not found

** /stderr **
start_stop_delete_test.go:215: failed to get info on auto-pause deployments. args "kubectl --context embed-certs-684693 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:219: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/embed-certs/serial/EnableAddonWhileActive]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestStartStop/group/embed-certs/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect embed-certs-684693
helpers_test.go:244: (dbg) docker inspect embed-certs-684693:

-- stdout --
	[
	    {
	        "Id": "6098c312c5a2ed6ee82f457e7f448de16796cbbcc23aaa5c659a80de165095ca",
	        "Created": "2025-12-25T19:01:30.292736794Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 272013,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-25T19:01:30.330300483Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:c444b5e11df9d6bc496256b0e11f4e11deb33a885211ded2c3a4667df55bcb6b",
	        "ResolvConfPath": "/var/lib/docker/containers/6098c312c5a2ed6ee82f457e7f448de16796cbbcc23aaa5c659a80de165095ca/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/6098c312c5a2ed6ee82f457e7f448de16796cbbcc23aaa5c659a80de165095ca/hostname",
	        "HostsPath": "/var/lib/docker/containers/6098c312c5a2ed6ee82f457e7f448de16796cbbcc23aaa5c659a80de165095ca/hosts",
	        "LogPath": "/var/lib/docker/containers/6098c312c5a2ed6ee82f457e7f448de16796cbbcc23aaa5c659a80de165095ca/6098c312c5a2ed6ee82f457e7f448de16796cbbcc23aaa5c659a80de165095ca-json.log",
	        "Name": "/embed-certs-684693",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "embed-certs-684693:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "embed-certs-684693",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "6098c312c5a2ed6ee82f457e7f448de16796cbbcc23aaa5c659a80de165095ca",
	                "LowerDir": "/var/lib/docker/overlay2/33e9c790cbddae9e88f8f10faf1c8c8e9f7c8f596b2ebc8b3c765318689791e6-init/diff:/var/lib/docker/overlay2/8152586e7e91edad0090b5c322534edd1346ae6dc28cbca1827aa4c23f366758/diff",
	                "MergedDir": "/var/lib/docker/overlay2/33e9c790cbddae9e88f8f10faf1c8c8e9f7c8f596b2ebc8b3c765318689791e6/merged",
	                "UpperDir": "/var/lib/docker/overlay2/33e9c790cbddae9e88f8f10faf1c8c8e9f7c8f596b2ebc8b3c765318689791e6/diff",
	                "WorkDir": "/var/lib/docker/overlay2/33e9c790cbddae9e88f8f10faf1c8c8e9f7c8f596b2ebc8b3c765318689791e6/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "embed-certs-684693",
	                "Source": "/var/lib/docker/volumes/embed-certs-684693/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "embed-certs-684693",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "embed-certs-684693",
	                "name.minikube.sigs.k8s.io": "embed-certs-684693",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "9b9058e0744194ede2c0704f87c869e9a0d7bb56af8d108659cd4f31619da149",
	            "SandboxKey": "/var/run/docker/netns/9b9058e07441",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33068"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33069"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33072"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33070"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33071"
	                    }
	                ]
	            },
	            "Networks": {
	                "embed-certs-684693": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "b5ae0820826f166ee69d26403125a109290c4a58c28c34d1ba9a229995b23eef",
	                    "EndpointID": "a40a7117c1b8c3d837953299a103ba7c2dfa02f9dafa8eeb4d0c791f8024d518",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "MacAddress": "0a:c7:37:76:e3:3f",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "embed-certs-684693",
	                        "6098c312c5a2"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
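The "Ports" section of the inspect output above is what minikube reads to find the host port mapped to the guest SSH port; the same Go template appears later in these logs. As a sketch (assuming the container is still running), the lookup can be repeated by hand:

	docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' embed-certs-684693
	# prints 33068 for the state captured above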
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-684693 -n embed-certs-684693
helpers_test.go:253: <<< TestStartStop/group/embed-certs/serial/EnableAddonWhileActive FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestStartStop/group/embed-certs/serial/EnableAddonWhileActive]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-684693 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-amd64 -p embed-certs-684693 logs -n 25: (1.21115802s)
helpers_test.go:261: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────
────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │          PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────
────┤
	│ ssh     │ -p cert-options-026286 -- sudo cat /etc/kubernetes/admin.conf                                                                                                                                                                                 │ cert-options-026286       │ jenkins │ v1.37.0 │ 25 Dec 25 18:58 UTC │ 25 Dec 25 18:58 UTC │
	│ delete  │ -p cert-options-026286                                                                                                                                                                                                                        │ cert-options-026286       │ jenkins │ v1.37.0 │ 25 Dec 25 18:58 UTC │ 25 Dec 25 18:58 UTC │
	│ start   │ -p test-preload-632730 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio                                                                                                                  │ test-preload-632730       │ jenkins │ v1.37.0 │ 25 Dec 25 18:58 UTC │ 25 Dec 25 18:59 UTC │
	│ image   │ test-preload-632730 image pull ghcr.io/medyagh/image-mirrors/busybox:latest                                                                                                                                                                   │ test-preload-632730       │ jenkins │ v1.37.0 │ 25 Dec 25 18:59 UTC │ 25 Dec 25 18:59 UTC │
	│ stop    │ -p test-preload-632730                                                                                                                                                                                                                        │ test-preload-632730       │ jenkins │ v1.37.0 │ 25 Dec 25 18:59 UTC │ 25 Dec 25 18:59 UTC │
	│ start   │ -p test-preload-632730 --preload=true --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=crio                                                                                                                            │ test-preload-632730       │ jenkins │ v1.37.0 │ 25 Dec 25 18:59 UTC │ 25 Dec 25 19:00 UTC │
	│ image   │ test-preload-632730 image list                                                                                                                                                                                                                │ test-preload-632730       │ jenkins │ v1.37.0 │ 25 Dec 25 19:00 UTC │ 25 Dec 25 19:00 UTC │
	│ delete  │ -p test-preload-632730                                                                                                                                                                                                                        │ test-preload-632730       │ jenkins │ v1.37.0 │ 25 Dec 25 19:00 UTC │ 25 Dec 25 19:00 UTC │
	│ start   │ -p kubernetes-upgrade-498224 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                                                                                      │ kubernetes-upgrade-498224 │ jenkins │ v1.37.0 │ 25 Dec 25 19:00 UTC │ 25 Dec 25 19:00 UTC │
	│ delete  │ -p stopped-upgrade-746190                                                                                                                                                                                                                     │ stopped-upgrade-746190    │ jenkins │ v1.37.0 │ 25 Dec 25 19:00 UTC │ 25 Dec 25 19:00 UTC │
	│ start   │ -p old-k8s-version-163446 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-163446    │ jenkins │ v1.37.0 │ 25 Dec 25 19:00 UTC │ 25 Dec 25 19:01 UTC │
	│ stop    │ -p kubernetes-upgrade-498224 --alsologtostderr                                                                                                                                                                                                │ kubernetes-upgrade-498224 │ jenkins │ v1.37.0 │ 25 Dec 25 19:00 UTC │ 25 Dec 25 19:00 UTC │
	│ start   │ -p kubernetes-upgrade-498224 --memory=3072 --kubernetes-version=v1.35.0-rc.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                                                                                 │ kubernetes-upgrade-498224 │ jenkins │ v1.37.0 │ 25 Dec 25 19:00 UTC │                     │
	│ start   │ -p cert-expiration-002470 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio                                                                                                                                     │ cert-expiration-002470    │ jenkins │ v1.37.0 │ 25 Dec 25 19:00 UTC │ 25 Dec 25 19:01 UTC │
	│ delete  │ -p cert-expiration-002470                                                                                                                                                                                                                     │ cert-expiration-002470    │ jenkins │ v1.37.0 │ 25 Dec 25 19:01 UTC │ 25 Dec 25 19:01 UTC │
	│ start   │ -p no-preload-148352 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-rc.1                                                                                  │ no-preload-148352         │ jenkins │ v1.37.0 │ 25 Dec 25 19:01 UTC │ 25 Dec 25 19:01 UTC │
	│ delete  │ -p running-upgrade-861192                                                                                                                                                                                                                     │ running-upgrade-861192    │ jenkins │ v1.37.0 │ 25 Dec 25 19:01 UTC │ 25 Dec 25 19:01 UTC │
	│ start   │ -p embed-certs-684693 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.3                                                                                        │ embed-certs-684693        │ jenkins │ v1.37.0 │ 25 Dec 25 19:01 UTC │ 25 Dec 25 19:02 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-163446 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-163446    │ jenkins │ v1.37.0 │ 25 Dec 25 19:01 UTC │                     │
	│ stop    │ -p old-k8s-version-163446 --alsologtostderr -v=3                                                                                                                                                                                              │ old-k8s-version-163446    │ jenkins │ v1.37.0 │ 25 Dec 25 19:01 UTC │ 25 Dec 25 19:01 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-163446 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-163446    │ jenkins │ v1.37.0 │ 25 Dec 25 19:01 UTC │ 25 Dec 25 19:01 UTC │
	│ start   │ -p old-k8s-version-163446 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-163446    │ jenkins │ v1.37.0 │ 25 Dec 25 19:01 UTC │                     │
	│ addons  │ enable metrics-server -p no-preload-148352 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-148352         │ jenkins │ v1.37.0 │ 25 Dec 25 19:02 UTC │                     │
	│ stop    │ -p no-preload-148352 --alsologtostderr -v=3                                                                                                                                                                                                   │ no-preload-148352         │ jenkins │ v1.37.0 │ 25 Dec 25 19:02 UTC │                     │
	│ addons  │ enable metrics-server -p embed-certs-684693 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-684693        │ jenkins │ v1.37.0 │ 25 Dec 25 19:02 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────
────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/25 19:01:51
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1225 19:01:51.675605  276130 out.go:360] Setting OutFile to fd 1 ...
	I1225 19:01:51.675754  276130 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1225 19:01:51.675766  276130 out.go:374] Setting ErrFile to fd 2...
	I1225 19:01:51.675773  276130 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1225 19:01:51.676086  276130 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22301-5579/.minikube/bin
	I1225 19:01:51.676847  276130 out.go:368] Setting JSON to false
	I1225 19:01:51.678335  276130 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":2660,"bootTime":1766686652,"procs":317,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1225 19:01:51.678405  276130 start.go:143] virtualization: kvm guest
	I1225 19:01:51.680654  276130 out.go:179] * [old-k8s-version-163446] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1225 19:01:51.682364  276130 out.go:179]   - MINIKUBE_LOCATION=22301
	I1225 19:01:51.682359  276130 notify.go:221] Checking for updates...
	I1225 19:01:51.684153  276130 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1225 19:01:51.687023  276130 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22301-5579/kubeconfig
	I1225 19:01:51.688399  276130 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22301-5579/.minikube
	I1225 19:01:51.690267  276130 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1225 19:01:51.692474  276130 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1225 19:01:51.694621  276130 config.go:182] Loaded profile config "old-k8s-version-163446": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1225 19:01:51.696580  276130 out.go:179] * Kubernetes 1.34.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.34.3
	I1225 19:01:51.697719  276130 driver.go:422] Setting default libvirt URI to qemu:///system
	I1225 19:01:51.728544  276130 docker.go:124] docker version: linux-29.1.3:Docker Engine - Community
	I1225 19:01:51.728646  276130 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1225 19:01:51.800338  276130 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:64 OomKillDisable:false NGoroutines:75 SystemTime:2025-12-25 19:01:51.787380411 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:dea7da592f5d1d2b7755e3a161be07f43fad8f75 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1225 19:01:51.800482  276130 docker.go:319] overlay module found
	I1225 19:01:51.802225  276130 out.go:179] * Using the docker driver based on existing profile
	I1225 19:01:51.803290  276130 start.go:309] selected driver: docker
	I1225 19:01:51.803307  276130 start.go:928] validating driver "docker" against &{Name:old-k8s-version-163446 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-163446 Namespace:default APIServerHAVIP: AP
IServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 M
ountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1225 19:01:51.803446  276130 start.go:939] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1225 19:01:51.804190  276130 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1225 19:01:51.874638  276130 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:64 OomKillDisable:false NGoroutines:75 SystemTime:2025-12-25 19:01:51.86245042 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x8
6_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:dea7da592f5d1d2b7755e3a161be07f43fad8f75 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[ma
p[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1225 19:01:51.875085  276130 start_flags.go:1019] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1225 19:01:51.875118  276130 cni.go:84] Creating CNI manager for ""
	I1225 19:01:51.875189  276130 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1225 19:01:51.875243  276130 start.go:353] cluster config:
	{Name:old-k8s-version-163446 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-163446 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local
ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetr
ics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1225 19:01:51.877033  276130 out.go:179] * Starting "old-k8s-version-163446" primary control-plane node in "old-k8s-version-163446" cluster
	I1225 19:01:51.878119  276130 cache.go:134] Beginning downloading kic base image for docker with crio
	I1225 19:01:51.879253  276130 out.go:179] * Pulling base image v0.0.48-1766570851-22316 ...
	I1225 19:01:51.880347  276130 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1225 19:01:51.880389  276130 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22301-5579/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4
	I1225 19:01:51.880401  276130 cache.go:65] Caching tarball of preloaded images
	I1225 19:01:51.880419  276130 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a in local docker daemon
	I1225 19:01:51.880482  276130 preload.go:251] Found /home/jenkins/minikube-integration/22301-5579/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1225 19:01:51.880497  276130 cache.go:68] Finished verifying existence of preloaded tar for v1.28.0 on crio
	I1225 19:01:51.880633  276130 profile.go:143] Saving config to /home/jenkins/minikube-integration/22301-5579/.minikube/profiles/old-k8s-version-163446/config.json ...
	I1225 19:01:51.902972  276130 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a in local docker daemon, skipping pull
	I1225 19:01:51.902994  276130 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a exists in daemon, skipping load
	I1225 19:01:51.903012  276130 cache.go:243] Successfully downloaded all kic artifacts
	I1225 19:01:51.903047  276130 start.go:360] acquireMachinesLock for old-k8s-version-163446: {Name:mk30fb3772624127c2ac3dfcbe1e2fab0a9ef77c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1225 19:01:51.903113  276130 start.go:364] duration metric: took 44.495µs to acquireMachinesLock for "old-k8s-version-163446"
	I1225 19:01:51.903135  276130 start.go:96] Skipping create...Using existing machine configuration
	I1225 19:01:51.903141  276130 fix.go:54] fixHost starting: 
	I1225 19:01:51.903429  276130 cli_runner.go:164] Run: docker container inspect old-k8s-version-163446 --format={{.State.Status}}
	I1225 19:01:51.923376  276130 fix.go:112] recreateIfNeeded on old-k8s-version-163446: state=Stopped err=<nil>
	W1225 19:01:51.923416  276130 fix.go:138] unexpected machine state, will restart: <nil>
	I1225 19:01:47.982821  260034 api_server.go:299] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1225 19:01:51.232631  270844 addons.go:530] duration metric: took 504.27226ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I1225 19:01:51.523694  270844 kapi.go:214] "coredns" deployment in "kube-system" namespace and "embed-certs-684693" context rescaled to 1 replicas
	W1225 19:01:53.023371  270844 node_ready.go:57] node "embed-certs-684693" has "Ready":"False" status (will retry)
	W1225 19:01:51.437852  265912 node_ready.go:57] node "no-preload-148352" has "Ready":"False" status (will retry)
	I1225 19:01:51.936453  265912 node_ready.go:49] node "no-preload-148352" is "Ready"
	I1225 19:01:51.936547  265912 node_ready.go:38] duration metric: took 12.004098982s for node "no-preload-148352" to be "Ready" ...
	I1225 19:01:51.936570  265912 api_server.go:52] waiting for apiserver process to appear ...
	I1225 19:01:51.936637  265912 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1225 19:01:51.955163  265912 api_server.go:72] duration metric: took 12.348945573s to wait for apiserver process to appear ...
	I1225 19:01:51.955196  265912 api_server.go:88] waiting for apiserver healthz status ...
	I1225 19:01:51.955220  265912 api_server.go:299] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1225 19:01:51.962118  265912 api_server.go:325] https://192.168.85.2:8443/healthz returned 200:
	ok
	I1225 19:01:51.963957  265912 api_server.go:141] control plane version: v1.35.0-rc.1
	I1225 19:01:51.963996  265912 api_server.go:131] duration metric: took 8.792203ms to wait for apiserver health ...
	I1225 19:01:51.964086  265912 system_pods.go:43] waiting for kube-system pods to appear ...
	I1225 19:01:51.972188  265912 system_pods.go:59] 8 kube-system pods found
	I1225 19:01:51.972227  265912 system_pods.go:61] "coredns-7d764666f9-lqvms" [87fc533e-6490-4d36-a61b-a754a22afd56] Pending
	I1225 19:01:51.972240  265912 system_pods.go:61] "etcd-no-preload-148352" [07fbfda5-ced9-48bb-819a-27d7a9d3c8c6] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1225 19:01:51.972247  265912 system_pods.go:61] "kindnet-jx25d" [25f416b3-e74e-4d6e-9b1b-d4ddf07659c4] Running
	I1225 19:01:51.972257  265912 system_pods.go:61] "kube-apiserver-no-preload-148352" [9bec5758-56c2-488b-8593-35fcdb4ec786] Running
	I1225 19:01:51.972264  265912 system_pods.go:61] "kube-controller-manager-no-preload-148352" [b44b6979-c22b-402f-8ce0-fabd78630461] Running
	I1225 19:01:51.972271  265912 system_pods.go:61] "kube-proxy-j2p4x" [ae9faca6-3e41-4e10-ae96-b7a397c3be75] Running
	I1225 19:01:51.972280  265912 system_pods.go:61] "kube-scheduler-no-preload-148352" [6dcf4763-851f-4d07-b708-4b5a579c03cb] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1225 19:01:51.972290  265912 system_pods.go:61] "storage-provisioner" [4caa74a1-bb32-45a7-9cc3-d0af791be23e] Pending
	I1225 19:01:51.972298  265912 system_pods.go:74] duration metric: took 8.204547ms to wait for pod list to return data ...
	I1225 19:01:51.972307  265912 default_sa.go:34] waiting for default service account to be created ...
	I1225 19:01:51.976257  265912 default_sa.go:45] found service account: "default"
	I1225 19:01:51.976287  265912 default_sa.go:55] duration metric: took 3.972409ms for default service account to be created ...
	I1225 19:01:51.976298  265912 system_pods.go:116] waiting for k8s-apps to be running ...
	I1225 19:01:51.980060  265912 system_pods.go:86] 8 kube-system pods found
	I1225 19:01:51.980094  265912 system_pods.go:89] "coredns-7d764666f9-lqvms" [87fc533e-6490-4d36-a61b-a754a22afd56] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1225 19:01:51.980104  265912 system_pods.go:89] "etcd-no-preload-148352" [07fbfda5-ced9-48bb-819a-27d7a9d3c8c6] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1225 19:01:51.980113  265912 system_pods.go:89] "kindnet-jx25d" [25f416b3-e74e-4d6e-9b1b-d4ddf07659c4] Running
	I1225 19:01:51.980120  265912 system_pods.go:89] "kube-apiserver-no-preload-148352" [9bec5758-56c2-488b-8593-35fcdb4ec786] Running
	I1225 19:01:51.980126  265912 system_pods.go:89] "kube-controller-manager-no-preload-148352" [b44b6979-c22b-402f-8ce0-fabd78630461] Running
	I1225 19:01:51.980131  265912 system_pods.go:89] "kube-proxy-j2p4x" [ae9faca6-3e41-4e10-ae96-b7a397c3be75] Running
	I1225 19:01:51.980139  265912 system_pods.go:89] "kube-scheduler-no-preload-148352" [6dcf4763-851f-4d07-b708-4b5a579c03cb] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1225 19:01:51.980232  265912 system_pods.go:89] "storage-provisioner" [4caa74a1-bb32-45a7-9cc3-d0af791be23e] Pending
	I1225 19:01:51.980275  265912 retry.go:84] will retry after 300ms: missing components: kube-dns
	I1225 19:01:52.262705  265912 system_pods.go:86] 8 kube-system pods found
	I1225 19:01:52.262747  265912 system_pods.go:89] "coredns-7d764666f9-lqvms" [87fc533e-6490-4d36-a61b-a754a22afd56] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1225 19:01:52.262757  265912 system_pods.go:89] "etcd-no-preload-148352" [07fbfda5-ced9-48bb-819a-27d7a9d3c8c6] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1225 19:01:52.262765  265912 system_pods.go:89] "kindnet-jx25d" [25f416b3-e74e-4d6e-9b1b-d4ddf07659c4] Running
	I1225 19:01:52.262771  265912 system_pods.go:89] "kube-apiserver-no-preload-148352" [9bec5758-56c2-488b-8593-35fcdb4ec786] Running
	I1225 19:01:52.262777  265912 system_pods.go:89] "kube-controller-manager-no-preload-148352" [b44b6979-c22b-402f-8ce0-fabd78630461] Running
	I1225 19:01:52.262783  265912 system_pods.go:89] "kube-proxy-j2p4x" [ae9faca6-3e41-4e10-ae96-b7a397c3be75] Running
	I1225 19:01:52.262791  265912 system_pods.go:89] "kube-scheduler-no-preload-148352" [6dcf4763-851f-4d07-b708-4b5a579c03cb] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1225 19:01:52.262801  265912 system_pods.go:89] "storage-provisioner" [4caa74a1-bb32-45a7-9cc3-d0af791be23e] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1225 19:01:52.602805  265912 system_pods.go:86] 8 kube-system pods found
	I1225 19:01:52.602846  265912 system_pods.go:89] "coredns-7d764666f9-lqvms" [87fc533e-6490-4d36-a61b-a754a22afd56] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1225 19:01:52.602872  265912 system_pods.go:89] "etcd-no-preload-148352" [07fbfda5-ced9-48bb-819a-27d7a9d3c8c6] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1225 19:01:52.602884  265912 system_pods.go:89] "kindnet-jx25d" [25f416b3-e74e-4d6e-9b1b-d4ddf07659c4] Running
	I1225 19:01:52.602890  265912 system_pods.go:89] "kube-apiserver-no-preload-148352" [9bec5758-56c2-488b-8593-35fcdb4ec786] Running
	I1225 19:01:52.602931  265912 system_pods.go:89] "kube-controller-manager-no-preload-148352" [b44b6979-c22b-402f-8ce0-fabd78630461] Running
	I1225 19:01:52.602948  265912 system_pods.go:89] "kube-proxy-j2p4x" [ae9faca6-3e41-4e10-ae96-b7a397c3be75] Running
	I1225 19:01:52.602958  265912 system_pods.go:89] "kube-scheduler-no-preload-148352" [6dcf4763-851f-4d07-b708-4b5a579c03cb] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1225 19:01:52.602971  265912 system_pods.go:89] "storage-provisioner" [4caa74a1-bb32-45a7-9cc3-d0af791be23e] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1225 19:01:52.928509  265912 system_pods.go:86] 8 kube-system pods found
	I1225 19:01:52.928550  265912 system_pods.go:89] "coredns-7d764666f9-lqvms" [87fc533e-6490-4d36-a61b-a754a22afd56] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1225 19:01:52.928557  265912 system_pods.go:89] "etcd-no-preload-148352" [07fbfda5-ced9-48bb-819a-27d7a9d3c8c6] Running
	I1225 19:01:52.928564  265912 system_pods.go:89] "kindnet-jx25d" [25f416b3-e74e-4d6e-9b1b-d4ddf07659c4] Running
	I1225 19:01:52.928568  265912 system_pods.go:89] "kube-apiserver-no-preload-148352" [9bec5758-56c2-488b-8593-35fcdb4ec786] Running
	I1225 19:01:52.928574  265912 system_pods.go:89] "kube-controller-manager-no-preload-148352" [b44b6979-c22b-402f-8ce0-fabd78630461] Running
	I1225 19:01:52.928579  265912 system_pods.go:89] "kube-proxy-j2p4x" [ae9faca6-3e41-4e10-ae96-b7a397c3be75] Running
	I1225 19:01:52.928586  265912 system_pods.go:89] "kube-scheduler-no-preload-148352" [6dcf4763-851f-4d07-b708-4b5a579c03cb] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1225 19:01:52.928594  265912 system_pods.go:89] "storage-provisioner" [4caa74a1-bb32-45a7-9cc3-d0af791be23e] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1225 19:01:53.518205  265912 system_pods.go:86] 8 kube-system pods found
	I1225 19:01:53.518237  265912 system_pods.go:89] "coredns-7d764666f9-lqvms" [87fc533e-6490-4d36-a61b-a754a22afd56] Running
	I1225 19:01:53.518245  265912 system_pods.go:89] "etcd-no-preload-148352" [07fbfda5-ced9-48bb-819a-27d7a9d3c8c6] Running
	I1225 19:01:53.518252  265912 system_pods.go:89] "kindnet-jx25d" [25f416b3-e74e-4d6e-9b1b-d4ddf07659c4] Running
	I1225 19:01:53.518257  265912 system_pods.go:89] "kube-apiserver-no-preload-148352" [9bec5758-56c2-488b-8593-35fcdb4ec786] Running
	I1225 19:01:53.518263  265912 system_pods.go:89] "kube-controller-manager-no-preload-148352" [b44b6979-c22b-402f-8ce0-fabd78630461] Running
	I1225 19:01:53.518268  265912 system_pods.go:89] "kube-proxy-j2p4x" [ae9faca6-3e41-4e10-ae96-b7a397c3be75] Running
	I1225 19:01:53.518277  265912 system_pods.go:89] "kube-scheduler-no-preload-148352" [6dcf4763-851f-4d07-b708-4b5a579c03cb] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1225 19:01:53.518286  265912 system_pods.go:89] "storage-provisioner" [4caa74a1-bb32-45a7-9cc3-d0af791be23e] Running
	I1225 19:01:53.518298  265912 system_pods.go:126] duration metric: took 1.541992254s to wait for k8s-apps to be running ...
	I1225 19:01:53.518312  265912 system_svc.go:44] waiting for kubelet service to be running ....
	I1225 19:01:53.518368  265912 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1225 19:01:53.532105  265912 system_svc.go:56] duration metric: took 13.784132ms WaitForService to wait for kubelet
	I1225 19:01:53.532135  265912 kubeadm.go:587] duration metric: took 13.925923208s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1225 19:01:53.532174  265912 node_conditions.go:102] verifying NodePressure condition ...
	I1225 19:01:53.534852  265912 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1225 19:01:53.534883  265912 node_conditions.go:123] node cpu capacity is 8
	I1225 19:01:53.534911  265912 node_conditions.go:105] duration metric: took 2.72618ms to run NodePressure ...
	I1225 19:01:53.534921  265912 start.go:242] waiting for startup goroutines ...
	I1225 19:01:53.534928  265912 start.go:247] waiting for cluster config update ...
	I1225 19:01:53.534938  265912 start.go:256] writing updated cluster config ...
	I1225 19:01:53.535188  265912 ssh_runner.go:195] Run: rm -f paused
	I1225 19:01:53.539003  265912 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1225 19:01:53.542592  265912 pod_ready.go:83] waiting for pod "coredns-7d764666f9-lqvms" in "kube-system" namespace to be "Ready" or be gone ...
	I1225 19:01:53.546466  265912 pod_ready.go:94] pod "coredns-7d764666f9-lqvms" is "Ready"
	I1225 19:01:53.546490  265912 pod_ready.go:86] duration metric: took 3.87525ms for pod "coredns-7d764666f9-lqvms" in "kube-system" namespace to be "Ready" or be gone ...
	I1225 19:01:53.548174  265912 pod_ready.go:83] waiting for pod "etcd-no-preload-148352" in "kube-system" namespace to be "Ready" or be gone ...
	I1225 19:01:53.551672  265912 pod_ready.go:94] pod "etcd-no-preload-148352" is "Ready"
	I1225 19:01:53.551690  265912 pod_ready.go:86] duration metric: took 3.499511ms for pod "etcd-no-preload-148352" in "kube-system" namespace to be "Ready" or be gone ...
	I1225 19:01:53.553250  265912 pod_ready.go:83] waiting for pod "kube-apiserver-no-preload-148352" in "kube-system" namespace to be "Ready" or be gone ...
	I1225 19:01:53.556527  265912 pod_ready.go:94] pod "kube-apiserver-no-preload-148352" is "Ready"
	I1225 19:01:53.556544  265912 pod_ready.go:86] duration metric: took 3.277149ms for pod "kube-apiserver-no-preload-148352" in "kube-system" namespace to be "Ready" or be gone ...
	I1225 19:01:53.558187  265912 pod_ready.go:83] waiting for pod "kube-controller-manager-no-preload-148352" in "kube-system" namespace to be "Ready" or be gone ...
	I1225 19:01:53.943296  265912 pod_ready.go:94] pod "kube-controller-manager-no-preload-148352" is "Ready"
	I1225 19:01:53.943330  265912 pod_ready.go:86] duration metric: took 385.124131ms for pod "kube-controller-manager-no-preload-148352" in "kube-system" namespace to be "Ready" or be gone ...
	I1225 19:01:54.143419  265912 pod_ready.go:83] waiting for pod "kube-proxy-j2p4x" in "kube-system" namespace to be "Ready" or be gone ...
	I1225 19:01:54.543815  265912 pod_ready.go:94] pod "kube-proxy-j2p4x" is "Ready"
	I1225 19:01:54.543842  265912 pod_ready.go:86] duration metric: took 400.398926ms for pod "kube-proxy-j2p4x" in "kube-system" namespace to be "Ready" or be gone ...
	I1225 19:01:54.744389  265912 pod_ready.go:83] waiting for pod "kube-scheduler-no-preload-148352" in "kube-system" namespace to be "Ready" or be gone ...
	I1225 19:01:55.143370  265912 pod_ready.go:94] pod "kube-scheduler-no-preload-148352" is "Ready"
	I1225 19:01:55.143397  265912 pod_ready.go:86] duration metric: took 398.984672ms for pod "kube-scheduler-no-preload-148352" in "kube-system" namespace to be "Ready" or be gone ...
	I1225 19:01:55.143409  265912 pod_ready.go:40] duration metric: took 1.604375839s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1225 19:01:55.186121  265912 start.go:625] kubectl: 1.35.0, cluster: 1.35.0-rc.1 (minor skew: 0)
	I1225 19:01:55.187703  265912 out.go:179] * Done! kubectl is now configured to use "no-preload-148352" cluster and "default" namespace by default
	I1225 19:01:51.925295  276130 out.go:252] * Restarting existing docker container for "old-k8s-version-163446" ...
	I1225 19:01:51.925382  276130 cli_runner.go:164] Run: docker start old-k8s-version-163446
	I1225 19:01:52.230176  276130 cli_runner.go:164] Run: docker container inspect old-k8s-version-163446 --format={{.State.Status}}
	I1225 19:01:52.253239  276130 kic.go:430] container "old-k8s-version-163446" state is running.
	I1225 19:01:52.253707  276130 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-163446
	I1225 19:01:52.284107  276130 profile.go:143] Saving config to /home/jenkins/minikube-integration/22301-5579/.minikube/profiles/old-k8s-version-163446/config.json ...
	I1225 19:01:52.284389  276130 machine.go:94] provisionDockerMachine start ...
	I1225 19:01:52.284493  276130 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-163446
	I1225 19:01:52.309604  276130 main.go:144] libmachine: Using SSH client type: native
	I1225 19:01:52.309911  276130 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e300] 0x850fa0 <nil>  [] 0s} 127.0.0.1 33073 <nil> <nil>}
	I1225 19:01:52.309931  276130 main.go:144] libmachine: About to run SSH command:
	hostname
	I1225 19:01:52.310560  276130 main.go:144] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:51420->127.0.0.1:33073: read: connection reset by peer
	I1225 19:01:55.438773  276130 main.go:144] libmachine: SSH cmd err, output: <nil>: old-k8s-version-163446
	
	I1225 19:01:55.438801  276130 ubuntu.go:182] provisioning hostname "old-k8s-version-163446"
	I1225 19:01:55.438858  276130 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-163446
	I1225 19:01:55.456977  276130 main.go:144] libmachine: Using SSH client type: native
	I1225 19:01:55.457218  276130 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e300] 0x850fa0 <nil>  [] 0s} 127.0.0.1 33073 <nil> <nil>}
	I1225 19:01:55.457233  276130 main.go:144] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-163446 && echo "old-k8s-version-163446" | sudo tee /etc/hostname
	I1225 19:01:55.589831  276130 main.go:144] libmachine: SSH cmd err, output: <nil>: old-k8s-version-163446
	
	I1225 19:01:55.589916  276130 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-163446
	I1225 19:01:55.608474  276130 main.go:144] libmachine: Using SSH client type: native
	I1225 19:01:55.608714  276130 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e300] 0x850fa0 <nil>  [] 0s} 127.0.0.1 33073 <nil> <nil>}
	I1225 19:01:55.608740  276130 main.go:144] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-163446' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-163446/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-163446' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1225 19:01:55.733669  276130 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	I1225 19:01:55.733696  276130 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22301-5579/.minikube CaCertPath:/home/jenkins/minikube-integration/22301-5579/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22301-5579/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22301-5579/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22301-5579/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22301-5579/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22301-5579/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22301-5579/.minikube}
	I1225 19:01:55.733725  276130 ubuntu.go:190] setting up certificates
	I1225 19:01:55.733734  276130 provision.go:84] configureAuth start
	I1225 19:01:55.733796  276130 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-163446
	I1225 19:01:55.752370  276130 provision.go:143] copyHostCerts
	I1225 19:01:55.752450  276130 exec_runner.go:144] found /home/jenkins/minikube-integration/22301-5579/.minikube/key.pem, removing ...
	I1225 19:01:55.752469  276130 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22301-5579/.minikube/key.pem
	I1225 19:01:55.752551  276130 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22301-5579/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22301-5579/.minikube/key.pem (1679 bytes)
	I1225 19:01:55.752677  276130 exec_runner.go:144] found /home/jenkins/minikube-integration/22301-5579/.minikube/ca.pem, removing ...
	I1225 19:01:55.752690  276130 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22301-5579/.minikube/ca.pem
	I1225 19:01:55.752734  276130 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22301-5579/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22301-5579/.minikube/ca.pem (1078 bytes)
	I1225 19:01:55.752835  276130 exec_runner.go:144] found /home/jenkins/minikube-integration/22301-5579/.minikube/cert.pem, removing ...
	I1225 19:01:55.752844  276130 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22301-5579/.minikube/cert.pem
	I1225 19:01:55.752870  276130 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22301-5579/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22301-5579/.minikube/cert.pem (1123 bytes)
	I1225 19:01:55.752973  276130 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22301-5579/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22301-5579/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22301-5579/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-163446 san=[127.0.0.1 192.168.103.2 localhost minikube old-k8s-version-163446]
	I1225 19:01:55.896803  276130 provision.go:177] copyRemoteCerts
	I1225 19:01:55.896864  276130 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1225 19:01:55.896921  276130 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-163446
	I1225 19:01:55.915054  276130 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33073 SSHKeyPath:/home/jenkins/minikube-integration/22301-5579/.minikube/machines/old-k8s-version-163446/id_rsa Username:docker}
	I1225 19:01:56.008028  276130 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22301-5579/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1225 19:01:56.025380  276130 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22301-5579/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I1225 19:01:56.042379  276130 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22301-5579/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1225 19:01:56.059197  276130 provision.go:87] duration metric: took 325.450567ms to configureAuth
	I1225 19:01:56.059235  276130 ubuntu.go:206] setting minikube options for container-runtime
	I1225 19:01:56.059435  276130 config.go:182] Loaded profile config "old-k8s-version-163446": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1225 19:01:56.059547  276130 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-163446
	I1225 19:01:56.079187  276130 main.go:144] libmachine: Using SSH client type: native
	I1225 19:01:56.079459  276130 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e300] 0x850fa0 <nil>  [] 0s} 127.0.0.1 33073 <nil> <nil>}
	I1225 19:01:56.079484  276130 main.go:144] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1225 19:01:56.376750  276130 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1225 19:01:56.376778  276130 machine.go:97] duration metric: took 4.092368745s to provisionDockerMachine
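	[annotation] The last provisioning step above writes the container-runtime options into a sysconfig file and restarts CRI-O. The same commands as a standalone sketch (paths and values as shown in the log):

	  sudo mkdir -p /etc/sysconfig
	  # Allow the in-cluster service CIDR as an insecure registry for CRI-O.
	  printf '%s\n' "CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '" \
	    | sudo tee /etc/sysconfig/crio.minikube
	  sudo systemctl restart crio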
	I1225 19:01:56.376792  276130 start.go:293] postStartSetup for "old-k8s-version-163446" (driver="docker")
	I1225 19:01:56.376806  276130 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1225 19:01:56.376868  276130 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1225 19:01:56.376931  276130 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-163446
	I1225 19:01:56.396934  276130 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33073 SSHKeyPath:/home/jenkins/minikube-integration/22301-5579/.minikube/machines/old-k8s-version-163446/id_rsa Username:docker}
	I1225 19:01:56.487179  276130 ssh_runner.go:195] Run: cat /etc/os-release
	I1225 19:01:56.490768  276130 main.go:144] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1225 19:01:56.490791  276130 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1225 19:01:56.490802  276130 filesync.go:126] Scanning /home/jenkins/minikube-integration/22301-5579/.minikube/addons for local assets ...
	I1225 19:01:56.490846  276130 filesync.go:126] Scanning /home/jenkins/minikube-integration/22301-5579/.minikube/files for local assets ...
	I1225 19:01:56.490965  276130 filesync.go:149] local asset: /home/jenkins/minikube-integration/22301-5579/.minikube/files/etc/ssl/certs/91122.pem -> 91122.pem in /etc/ssl/certs
	I1225 19:01:56.491060  276130 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1225 19:01:56.498390  276130 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22301-5579/.minikube/files/etc/ssl/certs/91122.pem --> /etc/ssl/certs/91122.pem (1708 bytes)
	I1225 19:01:56.515486  276130 start.go:296] duration metric: took 138.680859ms for postStartSetup
	I1225 19:01:56.515553  276130 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1225 19:01:56.515620  276130 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-163446
	I1225 19:01:56.534756  276130 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33073 SSHKeyPath:/home/jenkins/minikube-integration/22301-5579/.minikube/machines/old-k8s-version-163446/id_rsa Username:docker}
	I1225 19:01:56.623072  276130 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1225 19:01:56.627508  276130 fix.go:56] duration metric: took 4.724362619s for fixHost
	I1225 19:01:56.627533  276130 start.go:83] releasing machines lock for "old-k8s-version-163446", held for 4.724407121s
	I1225 19:01:56.627585  276130 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-163446
	I1225 19:01:56.645589  276130 ssh_runner.go:195] Run: cat /version.json
	I1225 19:01:56.645642  276130 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-163446
	I1225 19:01:56.645663  276130 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1225 19:01:56.645731  276130 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-163446
	I1225 19:01:56.664433  276130 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33073 SSHKeyPath:/home/jenkins/minikube-integration/22301-5579/.minikube/machines/old-k8s-version-163446/id_rsa Username:docker}
	I1225 19:01:56.664731  276130 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33073 SSHKeyPath:/home/jenkins/minikube-integration/22301-5579/.minikube/machines/old-k8s-version-163446/id_rsa Username:docker}
	I1225 19:01:52.983253  260034 api_server.go:315] stopped: https://192.168.94.2:8443/healthz: Get "https://192.168.94.2:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1225 19:01:52.983321  260034 api_server.go:299] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1225 19:01:56.804929  276130 ssh_runner.go:195] Run: systemctl --version
	I1225 19:01:56.811494  276130 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1225 19:01:56.854732  276130 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1225 19:01:56.859988  276130 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1225 19:01:56.860089  276130 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1225 19:01:56.869218  276130 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1225 19:01:56.869244  276130 start.go:496] detecting cgroup driver to use...
	I1225 19:01:56.869277  276130 detect.go:190] detected "systemd" cgroup driver on host os
	I1225 19:01:56.869319  276130 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1225 19:01:56.884649  276130 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1225 19:01:56.900631  276130 docker.go:218] disabling cri-docker service (if available) ...
	I1225 19:01:56.900686  276130 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1225 19:01:56.919281  276130 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1225 19:01:56.934171  276130 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1225 19:01:57.025095  276130 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1225 19:01:57.113247  276130 docker.go:234] disabling docker service ...
	I1225 19:01:57.113306  276130 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1225 19:01:57.128235  276130 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1225 19:01:57.140313  276130 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1225 19:01:57.219850  276130 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1225 19:01:57.301227  276130 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1225 19:01:57.314525  276130 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1225 19:01:57.329026  276130 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I1225 19:01:57.329080  276130 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1225 19:01:57.338028  276130 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1225 19:01:57.338093  276130 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1225 19:01:57.346843  276130 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1225 19:01:57.356103  276130 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1225 19:01:57.364856  276130 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1225 19:01:57.372700  276130 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1225 19:01:57.381854  276130 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1225 19:01:57.390451  276130 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1225 19:01:57.399240  276130 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1225 19:01:57.406650  276130 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1225 19:01:57.414017  276130 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1225 19:01:57.494696  276130 ssh_runner.go:195] Run: sudo systemctl restart crio
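	[annotation] The sed commands above adjust CRI-O's drop-in config (pause image, cgroup driver, conmon cgroup) before the restart. Collected into one sketch for readability; the default_sysctls and ip_unprivileged_port_start edits in the log follow the same pattern and are omitted here:

	  CONF=/etc/crio/crio.conf.d/02-crio.conf   # drop-in file edited in the log above
	  sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' "$CONF"
	  sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' "$CONF"
	  sudo sed -i '/conmon_cgroup = .*/d' "$CONF"
	  sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' "$CONF"
	  sudo sh -c 'echo 1 > /proc/sys/net/ipv4/ip_forward'
	  sudo systemctl daemon-reload && sudo systemctl restart crio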
	I1225 19:01:57.641060  276130 start.go:553] Will wait 60s for socket path /var/run/crio/crio.sock
	I1225 19:01:57.641141  276130 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1225 19:01:57.645008  276130 start.go:574] Will wait 60s for crictl version
	I1225 19:01:57.645062  276130 ssh_runner.go:195] Run: which crictl
	I1225 19:01:57.648469  276130 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1225 19:01:57.671908  276130 start.go:590] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1225 19:01:57.671998  276130 ssh_runner.go:195] Run: crio --version
	I1225 19:01:57.700010  276130 ssh_runner.go:195] Run: crio --version
	I1225 19:01:57.729201  276130 out.go:179] * Preparing Kubernetes v1.28.0 on CRI-O 1.34.3 ...
	I1225 19:01:57.730354  276130 cli_runner.go:164] Run: docker network inspect old-k8s-version-163446 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1225 19:01:57.749041  276130 ssh_runner.go:195] Run: grep 192.168.103.1	host.minikube.internal$ /etc/hosts
	I1225 19:01:57.753048  276130 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.103.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1225 19:01:57.763306  276130 kubeadm.go:884] updating cluster {Name:old-k8s-version-163446 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-163446 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false} ...
	I1225 19:01:57.763401  276130 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1225 19:01:57.763439  276130 ssh_runner.go:195] Run: sudo crictl images --output json
	I1225 19:01:57.796309  276130 crio.go:561] all images are preloaded for cri-o runtime.
	I1225 19:01:57.796334  276130 crio.go:433] Images already preloaded, skipping extraction
	I1225 19:01:57.796395  276130 ssh_runner.go:195] Run: sudo crictl images --output json
	I1225 19:01:57.821609  276130 crio.go:561] all images are preloaded for cri-o runtime.
	I1225 19:01:57.821629  276130 cache_images.go:86] Images are preloaded, skipping loading
	I1225 19:01:57.821636  276130 kubeadm.go:935] updating node { 192.168.103.2 8443 v1.28.0 crio true true} ...
	I1225 19:01:57.821737  276130 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=old-k8s-version-163446 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.103.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-163446 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
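	[annotation] The kubelet unit fragment above is what later gets copied to the node as the 373-byte /etc/systemd/system/kubelet.service.d/10-kubeadm.conf drop-in (see the scp lines further down). A rough by-hand equivalent, with the ExecStart line copied from the log; the exact file layout minikube writes may differ slightly:

	  sudo mkdir -p /etc/systemd/system/kubelet.service.d
	  sudo tee /etc/systemd/system/kubelet.service.d/10-kubeadm.conf >/dev/null <<'EOF'
	  [Unit]
	  Wants=crio.service

	  [Service]
	  ExecStart=
	  ExecStart=/var/lib/minikube/binaries/v1.28.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=old-k8s-version-163446 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.103.2

	  [Install]
	  EOF
	  sudo systemctl daemon-reload
	  sudo systemctl start kubelet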
	I1225 19:01:57.821799  276130 ssh_runner.go:195] Run: crio config
	I1225 19:01:57.867365  276130 cni.go:84] Creating CNI manager for ""
	I1225 19:01:57.867387  276130 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1225 19:01:57.867403  276130 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1225 19:01:57.867423  276130 kubeadm.go:197] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.103.2 APIServerPort:8443 KubernetesVersion:v1.28.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-163446 NodeName:old-k8s-version-163446 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.103.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.103.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1225 19:01:57.867534  276130 kubeadm.go:203] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.103.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "old-k8s-version-163446"
	  kubeletExtraArgs:
	    node-ip: 192.168.103.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.103.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1225 19:01:57.867595  276130 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.0
	I1225 19:01:57.875551  276130 binaries.go:51] Found k8s binaries, skipping transfer
	I1225 19:01:57.875611  276130 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1225 19:01:57.883470  276130 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (373 bytes)
	I1225 19:01:57.896378  276130 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1225 19:01:57.908663  276130 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2162 bytes)
	I1225 19:01:57.921021  276130 ssh_runner.go:195] Run: grep 192.168.103.2	control-plane.minikube.internal$ /etc/hosts
	I1225 19:01:57.924530  276130 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.103.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1225 19:01:57.934133  276130 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1225 19:01:58.019057  276130 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1225 19:01:58.050346  276130 certs.go:69] Setting up /home/jenkins/minikube-integration/22301-5579/.minikube/profiles/old-k8s-version-163446 for IP: 192.168.103.2
	I1225 19:01:58.050374  276130 certs.go:195] generating shared ca certs ...
	I1225 19:01:58.050396  276130 certs.go:227] acquiring lock for ca certs: {Name:mkc96ab6366f062029d385d20297063671b19bca Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1225 19:01:58.050552  276130 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22301-5579/.minikube/ca.key
	I1225 19:01:58.050620  276130 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22301-5579/.minikube/proxy-client-ca.key
	I1225 19:01:58.050634  276130 certs.go:257] generating profile certs ...
	I1225 19:01:58.050748  276130 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22301-5579/.minikube/profiles/old-k8s-version-163446/client.key
	I1225 19:01:58.050813  276130 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22301-5579/.minikube/profiles/old-k8s-version-163446/apiserver.key.29a1c18a
	I1225 19:01:58.050861  276130 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22301-5579/.minikube/profiles/old-k8s-version-163446/proxy-client.key
	I1225 19:01:58.051057  276130 certs.go:484] found cert: /home/jenkins/minikube-integration/22301-5579/.minikube/certs/9112.pem (1338 bytes)
	W1225 19:01:58.051102  276130 certs.go:480] ignoring /home/jenkins/minikube-integration/22301-5579/.minikube/certs/9112_empty.pem, impossibly tiny 0 bytes
	I1225 19:01:58.051117  276130 certs.go:484] found cert: /home/jenkins/minikube-integration/22301-5579/.minikube/certs/ca-key.pem (1679 bytes)
	I1225 19:01:58.051154  276130 certs.go:484] found cert: /home/jenkins/minikube-integration/22301-5579/.minikube/certs/ca.pem (1078 bytes)
	I1225 19:01:58.051185  276130 certs.go:484] found cert: /home/jenkins/minikube-integration/22301-5579/.minikube/certs/cert.pem (1123 bytes)
	I1225 19:01:58.051226  276130 certs.go:484] found cert: /home/jenkins/minikube-integration/22301-5579/.minikube/certs/key.pem (1679 bytes)
	I1225 19:01:58.051282  276130 certs.go:484] found cert: /home/jenkins/minikube-integration/22301-5579/.minikube/files/etc/ssl/certs/91122.pem (1708 bytes)
	I1225 19:01:58.051832  276130 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22301-5579/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1225 19:01:58.071149  276130 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22301-5579/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1225 19:01:58.091332  276130 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22301-5579/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1225 19:01:58.111078  276130 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22301-5579/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1225 19:01:58.134281  276130 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22301-5579/.minikube/profiles/old-k8s-version-163446/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1225 19:01:58.154076  276130 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22301-5579/.minikube/profiles/old-k8s-version-163446/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1225 19:01:58.170605  276130 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22301-5579/.minikube/profiles/old-k8s-version-163446/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1225 19:01:58.186824  276130 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22301-5579/.minikube/profiles/old-k8s-version-163446/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1225 19:01:58.203459  276130 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22301-5579/.minikube/certs/9112.pem --> /usr/share/ca-certificates/9112.pem (1338 bytes)
	I1225 19:01:58.224250  276130 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22301-5579/.minikube/files/etc/ssl/certs/91122.pem --> /usr/share/ca-certificates/91122.pem (1708 bytes)
	I1225 19:01:58.241941  276130 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22301-5579/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1225 19:01:58.259915  276130 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (722 bytes)
	I1225 19:01:58.272375  276130 ssh_runner.go:195] Run: openssl version
	I1225 19:01:58.278637  276130 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/91122.pem
	I1225 19:01:58.285624  276130 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/91122.pem /etc/ssl/certs/91122.pem
	I1225 19:01:58.292922  276130 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/91122.pem
	I1225 19:01:58.296675  276130 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 25 18:34 /usr/share/ca-certificates/91122.pem
	I1225 19:01:58.296724  276130 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/91122.pem
	I1225 19:01:58.332307  276130 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1225 19:01:58.340040  276130 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1225 19:01:58.347224  276130 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1225 19:01:58.354992  276130 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1225 19:01:58.358755  276130 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 25 18:28 /usr/share/ca-certificates/minikubeCA.pem
	I1225 19:01:58.358809  276130 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1225 19:01:58.396228  276130 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1225 19:01:58.404034  276130 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/9112.pem
	I1225 19:01:58.411371  276130 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/9112.pem /etc/ssl/certs/9112.pem
	I1225 19:01:58.418644  276130 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/9112.pem
	I1225 19:01:58.422208  276130 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 25 18:34 /usr/share/ca-certificates/9112.pem
	I1225 19:01:58.422256  276130 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/9112.pem
	I1225 19:01:58.456987  276130 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
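	[annotation] The cert checks above follow the standard OpenSSL trust-directory pattern: each CA file under /usr/share/ca-certificates is linked into /etc/ssl/certs, and a second link named after the certificate's subject hash (b5213941.0 for minikubeCA.pem in this log) is expected to exist. A generic sketch of that pattern; the fallback ln for the hash link is an assumption, since the log only shows the test -L check:

	  CERT=/usr/share/ca-certificates/minikubeCA.pem        # one of the certs copied above
	  sudo ln -fs "$CERT" /etc/ssl/certs/minikubeCA.pem      # name-based link, as in the log
	  HASH=$(openssl x509 -hash -noout -in "$CERT")          # subject hash, e.g. b5213941
	  sudo test -L "/etc/ssl/certs/${HASH}.0" \
	    || sudo ln -s "$CERT" "/etc/ssl/certs/${HASH}.0"     # create hash link if missing (assumption)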
	I1225 19:01:58.464505  276130 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1225 19:01:58.468129  276130 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1225 19:01:58.502526  276130 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1225 19:01:58.537269  276130 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1225 19:01:58.578806  276130 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1225 19:01:58.625301  276130 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1225 19:01:58.676472  276130 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1225 19:01:58.739386  276130 kubeadm.go:401] StartCluster: {Name:old-k8s-version-163446 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-163446 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1225 19:01:58.739503  276130 cri.go:61] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1225 19:01:58.739557  276130 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1225 19:01:58.771437  276130 cri.go:96] found id: "b4b49a940b58f765b0e9b7ce25aea04517e3af0b3e9f3d8cb36a460d92e868f4"
	I1225 19:01:58.771460  276130 cri.go:96] found id: "739051af3caddbf4be898cc7e7f82a012b1edd3b32b01e120d48d8420bf77f67"
	I1225 19:01:58.771466  276130 cri.go:96] found id: "c1c1926bfed12740e7d65b2cd81a01a86dd6a1887ce4e9b9fc5fd2fa5d9e0552"
	I1225 19:01:58.771471  276130 cri.go:96] found id: "b66569b95e263d0c33bf3838b444600f919279c26935aa24c1bd52a5a645a4dd"
	I1225 19:01:58.771483  276130 cri.go:96] found id: ""
	I1225 19:01:58.771533  276130 ssh_runner.go:195] Run: sudo runc list -f json
	W1225 19:01:58.784098  276130 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-25T19:01:58Z" level=error msg="open /run/runc: no such file or directory"
	I1225 19:01:58.784176  276130 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1225 19:01:58.792694  276130 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1225 19:01:58.792714  276130 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1225 19:01:58.792763  276130 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1225 19:01:58.800666  276130 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1225 19:01:58.801909  276130 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-163446" does not appear in /home/jenkins/minikube-integration/22301-5579/kubeconfig
	I1225 19:01:58.802801  276130 kubeconfig.go:62] /home/jenkins/minikube-integration/22301-5579/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-163446" cluster setting kubeconfig missing "old-k8s-version-163446" context setting]
	I1225 19:01:58.804088  276130 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22301-5579/kubeconfig: {Name:mk959de02482281f87c2171d9b2421941fad1e06 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1225 19:01:58.806400  276130 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1225 19:01:58.816209  276130 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.103.2
	I1225 19:01:58.816242  276130 kubeadm.go:602] duration metric: took 23.522265ms to restartPrimaryControlPlane
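	[annotation] The restart path above decides whether the control plane needs to be reconfigured by diffing the freshly rendered kubeadm config against the copy already on the node; an empty diff means no re-init is required, which matches the "does not require reconfiguration" line. A minimal sketch of that check (paths as in the log; the echo is illustrative):

	  if sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new; then
	    # diff exits 0 when the files are identical
	    echo "kubeadm config unchanged; skipping control-plane reconfiguration"
	  fi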
	I1225 19:01:58.816262  276130 kubeadm.go:403] duration metric: took 76.879587ms to StartCluster
	I1225 19:01:58.816280  276130 settings.go:142] acquiring lock: {Name:mk8db67a95daebdad9164c803819dcb179c3006a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1225 19:01:58.816350  276130 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22301-5579/kubeconfig
	I1225 19:01:58.818733  276130 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22301-5579/kubeconfig: {Name:mk959de02482281f87c2171d9b2421941fad1e06 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1225 19:01:58.819066  276130 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1225 19:01:58.819102  276130 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1225 19:01:58.819205  276130 addons.go:70] Setting storage-provisioner=true in profile "old-k8s-version-163446"
	I1225 19:01:58.819226  276130 addons.go:70] Setting dashboard=true in profile "old-k8s-version-163446"
	I1225 19:01:58.819248  276130 addons.go:239] Setting addon dashboard=true in "old-k8s-version-163446"
	I1225 19:01:58.819244  276130 addons.go:70] Setting default-storageclass=true in profile "old-k8s-version-163446"
	W1225 19:01:58.819255  276130 addons.go:248] addon dashboard should already be in state true
	I1225 19:01:58.819258  276130 config.go:182] Loaded profile config "old-k8s-version-163446": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1225 19:01:58.819281  276130 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "old-k8s-version-163446"
	I1225 19:01:58.819283  276130 host.go:66] Checking if "old-k8s-version-163446" exists ...
	I1225 19:01:58.819231  276130 addons.go:239] Setting addon storage-provisioner=true in "old-k8s-version-163446"
	W1225 19:01:58.819385  276130 addons.go:248] addon storage-provisioner should already be in state true
	I1225 19:01:58.819408  276130 host.go:66] Checking if "old-k8s-version-163446" exists ...
	I1225 19:01:58.819654  276130 cli_runner.go:164] Run: docker container inspect old-k8s-version-163446 --format={{.State.Status}}
	I1225 19:01:58.819804  276130 cli_runner.go:164] Run: docker container inspect old-k8s-version-163446 --format={{.State.Status}}
	I1225 19:01:58.819817  276130 cli_runner.go:164] Run: docker container inspect old-k8s-version-163446 --format={{.State.Status}}
	I1225 19:01:58.822399  276130 out.go:179] * Verifying Kubernetes components...
	I1225 19:01:58.823700  276130 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1225 19:01:58.847385  276130 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1225 19:01:58.847395  276130 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1225 19:01:58.848689  276130 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1225 19:01:58.848711  276130 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1225 19:01:58.848768  276130 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-163446
	I1225 19:01:58.849449  276130 addons.go:239] Setting addon default-storageclass=true in "old-k8s-version-163446"
	W1225 19:01:58.849471  276130 addons.go:248] addon default-storageclass should already be in state true
	I1225 19:01:58.849501  276130 host.go:66] Checking if "old-k8s-version-163446" exists ...
	I1225 19:01:58.850031  276130 cli_runner.go:164] Run: docker container inspect old-k8s-version-163446 --format={{.State.Status}}
	I1225 19:01:58.853352  276130 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	W1225 19:01:55.024488  270844 node_ready.go:57] node "embed-certs-684693" has "Ready":"False" status (will retry)
	W1225 19:01:57.024614  270844 node_ready.go:57] node "embed-certs-684693" has "Ready":"False" status (will retry)
	W1225 19:01:59.025037  270844 node_ready.go:57] node "embed-certs-684693" has "Ready":"False" status (will retry)
	I1225 19:01:58.854434  276130 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1225 19:01:58.854451  276130 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1225 19:01:58.854503  276130 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-163446
	I1225 19:01:58.875399  276130 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33073 SSHKeyPath:/home/jenkins/minikube-integration/22301-5579/.minikube/machines/old-k8s-version-163446/id_rsa Username:docker}
	I1225 19:01:58.887620  276130 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1225 19:01:58.887645  276130 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1225 19:01:58.887701  276130 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-163446
	I1225 19:01:58.895891  276130 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33073 SSHKeyPath:/home/jenkins/minikube-integration/22301-5579/.minikube/machines/old-k8s-version-163446/id_rsa Username:docker}
	I1225 19:01:58.916727  276130 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33073 SSHKeyPath:/home/jenkins/minikube-integration/22301-5579/.minikube/machines/old-k8s-version-163446/id_rsa Username:docker}
	I1225 19:01:58.974466  276130 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1225 19:01:58.987777  276130 node_ready.go:35] waiting up to 6m0s for node "old-k8s-version-163446" to be "Ready" ...
	I1225 19:01:58.988649  276130 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1225 19:01:59.002525  276130 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1225 19:01:59.002544  276130 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1225 19:01:59.020540  276130 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1225 19:01:59.020566  276130 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1225 19:01:59.031044  276130 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1225 19:01:59.037260  276130 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1225 19:01:59.037287  276130 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1225 19:01:59.060073  276130 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1225 19:01:59.060103  276130 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1225 19:01:59.077038  276130 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1225 19:01:59.077067  276130 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1225 19:01:59.092404  276130 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1225 19:01:59.092431  276130 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1225 19:01:59.107083  276130 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1225 19:01:59.107113  276130 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1225 19:01:59.120954  276130 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1225 19:01:59.120993  276130 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1225 19:01:59.134740  276130 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1225 19:01:59.134763  276130 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1225 19:01:59.148778  276130 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1225 19:02:00.716200  276130 node_ready.go:49] node "old-k8s-version-163446" is "Ready"
	I1225 19:02:00.716233  276130 node_ready.go:38] duration metric: took 1.728421586s for node "old-k8s-version-163446" to be "Ready" ...
	I1225 19:02:00.716250  276130 api_server.go:52] waiting for apiserver process to appear ...
	I1225 19:02:00.716315  276130 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1225 19:02:01.350669  276130 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.361982667s)
	I1225 19:02:01.350737  276130 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.31959987s)
	I1225 19:02:01.689309  276130 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (2.540487412s)
	I1225 19:02:01.689376  276130 api_server.go:72] duration metric: took 2.870275259s to wait for apiserver process to appear ...
	I1225 19:02:01.689402  276130 api_server.go:88] waiting for apiserver healthz status ...
	I1225 19:02:01.689428  276130 api_server.go:299] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1225 19:02:01.691403  276130 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p old-k8s-version-163446 addons enable metrics-server
	
	I1225 19:02:01.692715  276130 out.go:179] * Enabled addons: storage-provisioner, default-storageclass, dashboard
	I1225 19:01:57.983775  260034 api_server.go:315] stopped: https://192.168.94.2:8443/healthz: Get "https://192.168.94.2:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1225 19:01:57.983834  260034 api_server.go:299] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	W1225 19:02:01.524107  270844 node_ready.go:57] node "embed-certs-684693" has "Ready":"False" status (will retry)
	I1225 19:02:03.523753  270844 node_ready.go:49] node "embed-certs-684693" is "Ready"
	I1225 19:02:03.523782  270844 node_ready.go:38] duration metric: took 12.503070428s for node "embed-certs-684693" to be "Ready" ...
	I1225 19:02:03.523798  270844 api_server.go:52] waiting for apiserver process to appear ...
	I1225 19:02:03.523849  270844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1225 19:02:03.539790  270844 api_server.go:72] duration metric: took 12.811449551s to wait for apiserver process to appear ...
	I1225 19:02:03.539817  270844 api_server.go:88] waiting for apiserver healthz status ...
	I1225 19:02:03.539839  270844 api_server.go:299] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1225 19:02:03.546068  270844 api_server.go:325] https://192.168.76.2:8443/healthz returned 200:
	ok
	I1225 19:02:03.547254  270844 api_server.go:141] control plane version: v1.34.3
	I1225 19:02:03.547282  270844 api_server.go:131] duration metric: took 7.45655ms to wait for apiserver health ...
	I1225 19:02:03.547294  270844 system_pods.go:43] waiting for kube-system pods to appear ...
	I1225 19:02:03.553030  270844 system_pods.go:59] 8 kube-system pods found
	I1225 19:02:03.553109  270844 system_pods.go:61] "coredns-66bc5c9577-n4nqj" [e02de70e-234a-4cf0-93f8-aac03bcce8cc] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1225 19:02:03.553120  270844 system_pods.go:61] "etcd-embed-certs-684693" [3bb05555-eb05-40bb-9547-53154738add7] Running
	I1225 19:02:03.553128  270844 system_pods.go:61] "kindnet-gqdkf" [655254fd-be22-4f04-a504-963b8b3da9f2] Running
	I1225 19:02:03.553134  270844 system_pods.go:61] "kube-apiserver-embed-certs-684693" [9826fbbb-77d2-43da-ae25-4d8e82236b2f] Running
	I1225 19:02:03.553148  270844 system_pods.go:61] "kube-controller-manager-embed-certs-684693" [6bedc00f-bd25-44d1-b4c3-0ebb3d35314b] Running
	I1225 19:02:03.553152  270844 system_pods.go:61] "kube-proxy-wzb26" [28372ff8-2832-49c8-b4ca-883af4201def] Running
	I1225 19:02:03.553157  270844 system_pods.go:61] "kube-scheduler-embed-certs-684693" [8cd9903e-f2f3-4efb-b85b-71ae600ce907] Running
	I1225 19:02:03.553165  270844 system_pods.go:61] "storage-provisioner" [7ee71ac9-a69c-4669-b8f2-a60dc3dac91f] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1225 19:02:03.553172  270844 system_pods.go:74] duration metric: took 5.872045ms to wait for pod list to return data ...
	I1225 19:02:03.553186  270844 default_sa.go:34] waiting for default service account to be created ...
	I1225 19:02:03.557049  270844 default_sa.go:45] found service account: "default"
	I1225 19:02:03.557120  270844 default_sa.go:55] duration metric: took 3.926711ms for default service account to be created ...
	I1225 19:02:03.557134  270844 system_pods.go:116] waiting for k8s-apps to be running ...
	I1225 19:02:03.651818  270844 system_pods.go:86] 8 kube-system pods found
	I1225 19:02:03.651845  270844 system_pods.go:89] "coredns-66bc5c9577-n4nqj" [e02de70e-234a-4cf0-93f8-aac03bcce8cc] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1225 19:02:03.651851  270844 system_pods.go:89] "etcd-embed-certs-684693" [3bb05555-eb05-40bb-9547-53154738add7] Running
	I1225 19:02:03.651858  270844 system_pods.go:89] "kindnet-gqdkf" [655254fd-be22-4f04-a504-963b8b3da9f2] Running
	I1225 19:02:03.651862  270844 system_pods.go:89] "kube-apiserver-embed-certs-684693" [9826fbbb-77d2-43da-ae25-4d8e82236b2f] Running
	I1225 19:02:03.651867  270844 system_pods.go:89] "kube-controller-manager-embed-certs-684693" [6bedc00f-bd25-44d1-b4c3-0ebb3d35314b] Running
	I1225 19:02:03.651871  270844 system_pods.go:89] "kube-proxy-wzb26" [28372ff8-2832-49c8-b4ca-883af4201def] Running
	I1225 19:02:03.651875  270844 system_pods.go:89] "kube-scheduler-embed-certs-684693" [8cd9903e-f2f3-4efb-b85b-71ae600ce907] Running
	I1225 19:02:03.651879  270844 system_pods.go:89] "storage-provisioner" [7ee71ac9-a69c-4669-b8f2-a60dc3dac91f] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1225 19:02:03.651930  270844 retry.go:84] will retry after 200ms: missing components: kube-dns
	I1225 19:02:03.857512  270844 system_pods.go:86] 8 kube-system pods found
	I1225 19:02:03.857548  270844 system_pods.go:89] "coredns-66bc5c9577-n4nqj" [e02de70e-234a-4cf0-93f8-aac03bcce8cc] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1225 19:02:03.857557  270844 system_pods.go:89] "etcd-embed-certs-684693" [3bb05555-eb05-40bb-9547-53154738add7] Running
	I1225 19:02:03.857566  270844 system_pods.go:89] "kindnet-gqdkf" [655254fd-be22-4f04-a504-963b8b3da9f2] Running
	I1225 19:02:03.857572  270844 system_pods.go:89] "kube-apiserver-embed-certs-684693" [9826fbbb-77d2-43da-ae25-4d8e82236b2f] Running
	I1225 19:02:03.857579  270844 system_pods.go:89] "kube-controller-manager-embed-certs-684693" [6bedc00f-bd25-44d1-b4c3-0ebb3d35314b] Running
	I1225 19:02:03.857586  270844 system_pods.go:89] "kube-proxy-wzb26" [28372ff8-2832-49c8-b4ca-883af4201def] Running
	I1225 19:02:03.857593  270844 system_pods.go:89] "kube-scheduler-embed-certs-684693" [8cd9903e-f2f3-4efb-b85b-71ae600ce907] Running
	I1225 19:02:03.857602  270844 system_pods.go:89] "storage-provisioner" [7ee71ac9-a69c-4669-b8f2-a60dc3dac91f] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1225 19:02:04.144426  270844 system_pods.go:86] 8 kube-system pods found
	I1225 19:02:04.144463  270844 system_pods.go:89] "coredns-66bc5c9577-n4nqj" [e02de70e-234a-4cf0-93f8-aac03bcce8cc] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1225 19:02:04.144472  270844 system_pods.go:89] "etcd-embed-certs-684693" [3bb05555-eb05-40bb-9547-53154738add7] Running
	I1225 19:02:04.144485  270844 system_pods.go:89] "kindnet-gqdkf" [655254fd-be22-4f04-a504-963b8b3da9f2] Running
	I1225 19:02:04.144491  270844 system_pods.go:89] "kube-apiserver-embed-certs-684693" [9826fbbb-77d2-43da-ae25-4d8e82236b2f] Running
	I1225 19:02:04.144497  270844 system_pods.go:89] "kube-controller-manager-embed-certs-684693" [6bedc00f-bd25-44d1-b4c3-0ebb3d35314b] Running
	I1225 19:02:04.144503  270844 system_pods.go:89] "kube-proxy-wzb26" [28372ff8-2832-49c8-b4ca-883af4201def] Running
	I1225 19:02:04.144508  270844 system_pods.go:89] "kube-scheduler-embed-certs-684693" [8cd9903e-f2f3-4efb-b85b-71ae600ce907] Running
	I1225 19:02:04.144527  270844 system_pods.go:89] "storage-provisioner" [7ee71ac9-a69c-4669-b8f2-a60dc3dac91f] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1225 19:02:04.601160  270844 system_pods.go:86] 8 kube-system pods found
	I1225 19:02:04.601193  270844 system_pods.go:89] "coredns-66bc5c9577-n4nqj" [e02de70e-234a-4cf0-93f8-aac03bcce8cc] Running
	I1225 19:02:04.601201  270844 system_pods.go:89] "etcd-embed-certs-684693" [3bb05555-eb05-40bb-9547-53154738add7] Running
	I1225 19:02:04.601206  270844 system_pods.go:89] "kindnet-gqdkf" [655254fd-be22-4f04-a504-963b8b3da9f2] Running
	I1225 19:02:04.601211  270844 system_pods.go:89] "kube-apiserver-embed-certs-684693" [9826fbbb-77d2-43da-ae25-4d8e82236b2f] Running
	I1225 19:02:04.601219  270844 system_pods.go:89] "kube-controller-manager-embed-certs-684693" [6bedc00f-bd25-44d1-b4c3-0ebb3d35314b] Running
	I1225 19:02:04.601224  270844 system_pods.go:89] "kube-proxy-wzb26" [28372ff8-2832-49c8-b4ca-883af4201def] Running
	I1225 19:02:04.601230  270844 system_pods.go:89] "kube-scheduler-embed-certs-684693" [8cd9903e-f2f3-4efb-b85b-71ae600ce907] Running
	I1225 19:02:04.601235  270844 system_pods.go:89] "storage-provisioner" [7ee71ac9-a69c-4669-b8f2-a60dc3dac91f] Running
	I1225 19:02:04.601245  270844 system_pods.go:126] duration metric: took 1.044103897s to wait for k8s-apps to be running ...
	I1225 19:02:04.601254  270844 system_svc.go:44] waiting for kubelet service to be running ....
	I1225 19:02:04.601305  270844 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1225 19:02:04.614777  270844 system_svc.go:56] duration metric: took 13.517356ms WaitForService to wait for kubelet
	I1225 19:02:04.614804  270844 kubeadm.go:587] duration metric: took 13.886479141s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1225 19:02:04.614823  270844 node_conditions.go:102] verifying NodePressure condition ...
	I1225 19:02:04.617945  270844 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1225 19:02:04.617973  270844 node_conditions.go:123] node cpu capacity is 8
	I1225 19:02:04.617990  270844 node_conditions.go:105] duration metric: took 3.161287ms to run NodePressure ...
	I1225 19:02:04.618001  270844 start.go:242] waiting for startup goroutines ...
	I1225 19:02:04.618008  270844 start.go:247] waiting for cluster config update ...
	I1225 19:02:04.618020  270844 start.go:256] writing updated cluster config ...
	I1225 19:02:04.618247  270844 ssh_runner.go:195] Run: rm -f paused
	I1225 19:02:04.622198  270844 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1225 19:02:04.625789  270844 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-n4nqj" in "kube-system" namespace to be "Ready" or be gone ...
	I1225 19:02:04.630139  270844 pod_ready.go:94] pod "coredns-66bc5c9577-n4nqj" is "Ready"
	I1225 19:02:04.630164  270844 pod_ready.go:86] duration metric: took 4.353963ms for pod "coredns-66bc5c9577-n4nqj" in "kube-system" namespace to be "Ready" or be gone ...
	I1225 19:02:04.632373  270844 pod_ready.go:83] waiting for pod "etcd-embed-certs-684693" in "kube-system" namespace to be "Ready" or be gone ...
	I1225 19:02:04.636174  270844 pod_ready.go:94] pod "etcd-embed-certs-684693" is "Ready"
	I1225 19:02:04.636198  270844 pod_ready.go:86] duration metric: took 3.800757ms for pod "etcd-embed-certs-684693" in "kube-system" namespace to be "Ready" or be gone ...
	I1225 19:02:04.638028  270844 pod_ready.go:83] waiting for pod "kube-apiserver-embed-certs-684693" in "kube-system" namespace to be "Ready" or be gone ...
	I1225 19:02:04.641827  270844 pod_ready.go:94] pod "kube-apiserver-embed-certs-684693" is "Ready"
	I1225 19:02:04.641847  270844 pod_ready.go:86] duration metric: took 3.798037ms for pod "kube-apiserver-embed-certs-684693" in "kube-system" namespace to be "Ready" or be gone ...
	I1225 19:02:04.643470  270844 pod_ready.go:83] waiting for pod "kube-controller-manager-embed-certs-684693" in "kube-system" namespace to be "Ready" or be gone ...
	I1225 19:02:05.027237  270844 pod_ready.go:94] pod "kube-controller-manager-embed-certs-684693" is "Ready"
	I1225 19:02:05.027260  270844 pod_ready.go:86] duration metric: took 383.76816ms for pod "kube-controller-manager-embed-certs-684693" in "kube-system" namespace to be "Ready" or be gone ...
	I1225 19:02:05.226454  270844 pod_ready.go:83] waiting for pod "kube-proxy-wzb26" in "kube-system" namespace to be "Ready" or be gone ...
	I1225 19:02:05.627015  270844 pod_ready.go:94] pod "kube-proxy-wzb26" is "Ready"
	I1225 19:02:05.627049  270844 pod_ready.go:86] duration metric: took 400.563304ms for pod "kube-proxy-wzb26" in "kube-system" namespace to be "Ready" or be gone ...
	I1225 19:02:05.826476  270844 pod_ready.go:83] waiting for pod "kube-scheduler-embed-certs-684693" in "kube-system" namespace to be "Ready" or be gone ...
	I1225 19:02:06.226919  270844 pod_ready.go:94] pod "kube-scheduler-embed-certs-684693" is "Ready"
	I1225 19:02:06.226944  270844 pod_ready.go:86] duration metric: took 400.447046ms for pod "kube-scheduler-embed-certs-684693" in "kube-system" namespace to be "Ready" or be gone ...
	I1225 19:02:06.226956  270844 pod_ready.go:40] duration metric: took 1.604722021s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1225 19:02:06.270594  270844 start.go:625] kubectl: 1.35.0, cluster: 1.34.3 (minor skew: 1)
	I1225 19:02:06.272341  270844 out.go:179] * Done! kubectl is now configured to use "embed-certs-684693" cluster and "default" namespace by default
	I1225 19:02:01.694200  276130 addons.go:530] duration metric: took 2.87509814s for enable addons: enabled=[storage-provisioner default-storageclass dashboard]
	I1225 19:02:01.694651  276130 api_server.go:325] https://192.168.103.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1225 19:02:01.694685  276130 api_server.go:103] status: https://192.168.103.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1225 19:02:02.190060  276130 api_server.go:299] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1225 19:02:02.195016  276130 api_server.go:325] https://192.168.103.2:8443/healthz returned 200:
	ok
	I1225 19:02:02.196388  276130 api_server.go:141] control plane version: v1.28.0
	I1225 19:02:02.196413  276130 api_server.go:131] duration metric: took 507.001906ms to wait for apiserver health ...
	I1225 19:02:02.196422  276130 system_pods.go:43] waiting for kube-system pods to appear ...
	I1225 19:02:02.200861  276130 system_pods.go:59] 8 kube-system pods found
	I1225 19:02:02.200928  276130 system_pods.go:61] "coredns-5dd5756b68-chdzr" [e2ed39ee-6ff2-4de9-b2af-b355672afc97] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1225 19:02:02.200947  276130 system_pods.go:61] "etcd-old-k8s-version-163446" [7efc5e80-e0a5-413e-8478-e0575bc25365] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1225 19:02:02.200961  276130 system_pods.go:61] "kindnet-krjfj" [d8ae6ebb-54be-4b65-93b2-6fca9646477f] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1225 19:02:02.200972  276130 system_pods.go:61] "kube-apiserver-old-k8s-version-163446" [c9753f87-b38f-481a-be2f-32535eda08b6] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1225 19:02:02.200988  276130 system_pods.go:61] "kube-controller-manager-old-k8s-version-163446" [6b5c6b14-d11a-45ee-90cf-8da7bf599c0a] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1225 19:02:02.200996  276130 system_pods.go:61] "kube-proxy-mxztf" [ac805838-ff33-483a-8b56-db2598a7c377] Running
	I1225 19:02:02.201005  276130 system_pods.go:61] "kube-scheduler-old-k8s-version-163446" [9ecbc23a-3334-40af-b2d2-739631afb06b] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1225 19:02:02.201010  276130 system_pods.go:61] "storage-provisioner" [937361bb-febe-4584-8f22-755d06866089] Running
	I1225 19:02:02.201018  276130 system_pods.go:74] duration metric: took 4.590806ms to wait for pod list to return data ...
	I1225 19:02:02.201038  276130 default_sa.go:34] waiting for default service account to be created ...
	I1225 19:02:02.202992  276130 default_sa.go:45] found service account: "default"
	I1225 19:02:02.203016  276130 default_sa.go:55] duration metric: took 1.968479ms for default service account to be created ...
	I1225 19:02:02.203026  276130 system_pods.go:116] waiting for k8s-apps to be running ...
	I1225 19:02:02.210873  276130 system_pods.go:86] 8 kube-system pods found
	I1225 19:02:02.210921  276130 system_pods.go:89] "coredns-5dd5756b68-chdzr" [e2ed39ee-6ff2-4de9-b2af-b355672afc97] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1225 19:02:02.210936  276130 system_pods.go:89] "etcd-old-k8s-version-163446" [7efc5e80-e0a5-413e-8478-e0575bc25365] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1225 19:02:02.210944  276130 system_pods.go:89] "kindnet-krjfj" [d8ae6ebb-54be-4b65-93b2-6fca9646477f] Running
	I1225 19:02:02.210953  276130 system_pods.go:89] "kube-apiserver-old-k8s-version-163446" [c9753f87-b38f-481a-be2f-32535eda08b6] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1225 19:02:02.210968  276130 system_pods.go:89] "kube-controller-manager-old-k8s-version-163446" [6b5c6b14-d11a-45ee-90cf-8da7bf599c0a] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1225 19:02:02.210984  276130 system_pods.go:89] "kube-proxy-mxztf" [ac805838-ff33-483a-8b56-db2598a7c377] Running
	I1225 19:02:02.210991  276130 system_pods.go:89] "kube-scheduler-old-k8s-version-163446" [9ecbc23a-3334-40af-b2d2-739631afb06b] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1225 19:02:02.211002  276130 system_pods.go:89] "storage-provisioner" [937361bb-febe-4584-8f22-755d06866089] Running
	I1225 19:02:02.211011  276130 system_pods.go:126] duration metric: took 7.979246ms to wait for k8s-apps to be running ...
	I1225 19:02:02.211025  276130 system_svc.go:44] waiting for kubelet service to be running ....
	I1225 19:02:02.211083  276130 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1225 19:02:02.228934  276130 system_svc.go:56] duration metric: took 17.900303ms WaitForService to wait for kubelet
	I1225 19:02:02.228969  276130 kubeadm.go:587] duration metric: took 3.40986613s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1225 19:02:02.229013  276130 node_conditions.go:102] verifying NodePressure condition ...
	I1225 19:02:02.234324  276130 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1225 19:02:02.234346  276130 node_conditions.go:123] node cpu capacity is 8
	I1225 19:02:02.234361  276130 node_conditions.go:105] duration metric: took 5.341408ms to run NodePressure ...
	I1225 19:02:02.234375  276130 start.go:242] waiting for startup goroutines ...
	I1225 19:02:02.234387  276130 start.go:247] waiting for cluster config update ...
	I1225 19:02:02.234402  276130 start.go:256] writing updated cluster config ...
	I1225 19:02:02.234664  276130 ssh_runner.go:195] Run: rm -f paused
	I1225 19:02:02.238438  276130 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1225 19:02:02.242484  276130 pod_ready.go:83] waiting for pod "coredns-5dd5756b68-chdzr" in "kube-system" namespace to be "Ready" or be gone ...
	W1225 19:02:04.248814  276130 pod_ready.go:104] pod "coredns-5dd5756b68-chdzr" is not "Ready", error: <nil>
	I1225 19:02:02.984843  260034 api_server.go:315] stopped: https://192.168.94.2:8443/healthz: Get "https://192.168.94.2:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1225 19:02:02.984917  260034 api_server.go:299] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	W1225 19:02:06.749331  276130 pod_ready.go:104] pod "coredns-5dd5756b68-chdzr" is not "Ready", error: <nil>
	W1225 19:02:09.247786  276130 pod_ready.go:104] pod "coredns-5dd5756b68-chdzr" is not "Ready", error: <nil>
	W1225 19:02:11.248693  276130 pod_ready.go:104] pod "coredns-5dd5756b68-chdzr" is not "Ready", error: <nil>
	I1225 19:02:07.735618  260034 api_server.go:315] stopped: https://192.168.94.2:8443/healthz: Get "https://192.168.94.2:8443/healthz": read tcp 192.168.94.1:60570->192.168.94.2:8443: read: connection reset by peer
	I1225 19:02:07.735673  260034 api_server.go:299] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1225 19:02:07.736339  260034 api_server.go:315] stopped: https://192.168.94.2:8443/healthz: Get "https://192.168.94.2:8443/healthz": dial tcp 192.168.94.2:8443: connect: connection refused
	I1225 19:02:07.982647  260034 api_server.go:299] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1225 19:02:07.983090  260034 api_server.go:315] stopped: https://192.168.94.2:8443/healthz: Get "https://192.168.94.2:8443/healthz": dial tcp 192.168.94.2:8443: connect: connection refused
	I1225 19:02:08.483355  260034 api_server.go:299] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1225 19:02:08.483685  260034 api_server.go:315] stopped: https://192.168.94.2:8443/healthz: Get "https://192.168.94.2:8443/healthz": dial tcp 192.168.94.2:8443: connect: connection refused
	I1225 19:02:08.983394  260034 api_server.go:299] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1225 19:02:08.983847  260034 api_server.go:315] stopped: https://192.168.94.2:8443/healthz: Get "https://192.168.94.2:8443/healthz": dial tcp 192.168.94.2:8443: connect: connection refused
	I1225 19:02:09.483562  260034 api_server.go:299] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1225 19:02:09.484030  260034 api_server.go:315] stopped: https://192.168.94.2:8443/healthz: Get "https://192.168.94.2:8443/healthz": dial tcp 192.168.94.2:8443: connect: connection refused
	I1225 19:02:09.982661  260034 api_server.go:299] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1225 19:02:09.983124  260034 api_server.go:315] stopped: https://192.168.94.2:8443/healthz: Get "https://192.168.94.2:8443/healthz": dial tcp 192.168.94.2:8443: connect: connection refused
	I1225 19:02:10.482755  260034 api_server.go:299] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1225 19:02:10.483143  260034 api_server.go:315] stopped: https://192.168.94.2:8443/healthz: Get "https://192.168.94.2:8443/healthz": dial tcp 192.168.94.2:8443: connect: connection refused
	I1225 19:02:10.982757  260034 api_server.go:299] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1225 19:02:10.983173  260034 api_server.go:315] stopped: https://192.168.94.2:8443/healthz: Get "https://192.168.94.2:8443/healthz": dial tcp 192.168.94.2:8443: connect: connection refused
	I1225 19:02:11.482741  260034 api_server.go:299] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1225 19:02:11.483183  260034 api_server.go:315] stopped: https://192.168.94.2:8443/healthz: Get "https://192.168.94.2:8443/healthz": dial tcp 192.168.94.2:8443: connect: connection refused
	I1225 19:02:11.982824  260034 api_server.go:299] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1225 19:02:11.983302  260034 api_server.go:315] stopped: https://192.168.94.2:8443/healthz: Get "https://192.168.94.2:8443/healthz": dial tcp 192.168.94.2:8443: connect: connection refused
	I1225 19:02:12.482952  260034 api_server.go:299] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1225 19:02:12.483358  260034 api_server.go:315] stopped: https://192.168.94.2:8443/healthz: Get "https://192.168.94.2:8443/healthz": dial tcp 192.168.94.2:8443: connect: connection refused
	
	
	==> CRI-O <==
	Dec 25 19:02:03 embed-certs-684693 crio[778]: time="2025-12-25T19:02:03.589183993Z" level=info msg="Starting container: b823c0d2390b051693e55625c6ca75d6f9e00f014191d4c14d4d91d32cb13949" id=780b64fa-d38a-4ee9-95e6-005739ad5796 name=/runtime.v1.RuntimeService/StartContainer
	Dec 25 19:02:03 embed-certs-684693 crio[778]: time="2025-12-25T19:02:03.590976561Z" level=info msg="Started container" PID=1917 containerID=b823c0d2390b051693e55625c6ca75d6f9e00f014191d4c14d4d91d32cb13949 description=kube-system/coredns-66bc5c9577-n4nqj/coredns id=780b64fa-d38a-4ee9-95e6-005739ad5796 name=/runtime.v1.RuntimeService/StartContainer sandboxID=9ceb6cfa98843bab73562cc495760ab031ca5c02bcdfeca6d706b62525828592
	Dec 25 19:02:06 embed-certs-684693 crio[778]: time="2025-12-25T19:02:06.723046541Z" level=info msg="Running pod sandbox: default/busybox/POD" id=3979ed28-41b4-4c9a-8caf-dce4fe767f1f name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 25 19:02:06 embed-certs-684693 crio[778]: time="2025-12-25T19:02:06.723168695Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 25 19:02:06 embed-certs-684693 crio[778]: time="2025-12-25T19:02:06.72946319Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:39f9d4cd1ccd6ebbfb6a69b365b697c1d41b2f9eba8bf0990c69cbc28b0c2ff9 UID:f8cdecb5-792b-4f73-bbd6-1c06cdaeb7bc NetNS:/var/run/netns/8e953ee9-756f-4ee9-acf1-06d90f48ef40 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc000992d20}] Aliases:map[]}"
	Dec 25 19:02:06 embed-certs-684693 crio[778]: time="2025-12-25T19:02:06.729490334Z" level=info msg="Adding pod default_busybox to CNI network \"kindnet\" (type=ptp)"
	Dec 25 19:02:06 embed-certs-684693 crio[778]: time="2025-12-25T19:02:06.741262082Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:39f9d4cd1ccd6ebbfb6a69b365b697c1d41b2f9eba8bf0990c69cbc28b0c2ff9 UID:f8cdecb5-792b-4f73-bbd6-1c06cdaeb7bc NetNS:/var/run/netns/8e953ee9-756f-4ee9-acf1-06d90f48ef40 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc000992d20}] Aliases:map[]}"
	Dec 25 19:02:06 embed-certs-684693 crio[778]: time="2025-12-25T19:02:06.741407136Z" level=info msg="Checking pod default_busybox for CNI network kindnet (type=ptp)"
	Dec 25 19:02:06 embed-certs-684693 crio[778]: time="2025-12-25T19:02:06.742370844Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Dec 25 19:02:06 embed-certs-684693 crio[778]: time="2025-12-25T19:02:06.743570462Z" level=info msg="Ran pod sandbox 39f9d4cd1ccd6ebbfb6a69b365b697c1d41b2f9eba8bf0990c69cbc28b0c2ff9 with infra container: default/busybox/POD" id=3979ed28-41b4-4c9a-8caf-dce4fe767f1f name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 25 19:02:06 embed-certs-684693 crio[778]: time="2025-12-25T19:02:06.744861913Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=8f7af0cf-347e-4dbd-b863-9e4c591fa548 name=/runtime.v1.ImageService/ImageStatus
	Dec 25 19:02:06 embed-certs-684693 crio[778]: time="2025-12-25T19:02:06.745071699Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=8f7af0cf-347e-4dbd-b863-9e4c591fa548 name=/runtime.v1.ImageService/ImageStatus
	Dec 25 19:02:06 embed-certs-684693 crio[778]: time="2025-12-25T19:02:06.745123191Z" level=info msg="Neither image nor artfiact gcr.io/k8s-minikube/busybox:1.28.4-glibc found" id=8f7af0cf-347e-4dbd-b863-9e4c591fa548 name=/runtime.v1.ImageService/ImageStatus
	Dec 25 19:02:06 embed-certs-684693 crio[778]: time="2025-12-25T19:02:06.745782915Z" level=info msg="Pulling image: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=711d3329-bb6c-4606-ac3c-f29b6b494e0b name=/runtime.v1.ImageService/PullImage
	Dec 25 19:02:06 embed-certs-684693 crio[778]: time="2025-12-25T19:02:06.747247952Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Dec 25 19:02:07 embed-certs-684693 crio[778]: time="2025-12-25T19:02:07.942073224Z" level=info msg="Pulled image: gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998" id=711d3329-bb6c-4606-ac3c-f29b6b494e0b name=/runtime.v1.ImageService/PullImage
	Dec 25 19:02:07 embed-certs-684693 crio[778]: time="2025-12-25T19:02:07.942672619Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=50398dd9-13dc-422b-b793-aa43d2391e30 name=/runtime.v1.ImageService/ImageStatus
	Dec 25 19:02:07 embed-certs-684693 crio[778]: time="2025-12-25T19:02:07.943949336Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=b35a0bb4-b764-42d0-9b82-312d39a0d044 name=/runtime.v1.ImageService/ImageStatus
	Dec 25 19:02:07 embed-certs-684693 crio[778]: time="2025-12-25T19:02:07.947001833Z" level=info msg="Creating container: default/busybox/busybox" id=9e8440fc-2e8a-4bc9-b2c0-339b83f7fada name=/runtime.v1.RuntimeService/CreateContainer
	Dec 25 19:02:07 embed-certs-684693 crio[778]: time="2025-12-25T19:02:07.947122726Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 25 19:02:07 embed-certs-684693 crio[778]: time="2025-12-25T19:02:07.950636537Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 25 19:02:07 embed-certs-684693 crio[778]: time="2025-12-25T19:02:07.951230713Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 25 19:02:07 embed-certs-684693 crio[778]: time="2025-12-25T19:02:07.984492225Z" level=info msg="Created container ce6c07dd86ccd7ae55795439e6946d344f1c47b588b39eab6fe3030df3a0e977: default/busybox/busybox" id=9e8440fc-2e8a-4bc9-b2c0-339b83f7fada name=/runtime.v1.RuntimeService/CreateContainer
	Dec 25 19:02:07 embed-certs-684693 crio[778]: time="2025-12-25T19:02:07.985308117Z" level=info msg="Starting container: ce6c07dd86ccd7ae55795439e6946d344f1c47b588b39eab6fe3030df3a0e977" id=b6d071d6-676f-4735-878c-4c8584cd1278 name=/runtime.v1.RuntimeService/StartContainer
	Dec 25 19:02:07 embed-certs-684693 crio[778]: time="2025-12-25T19:02:07.986996469Z" level=info msg="Started container" PID=1994 containerID=ce6c07dd86ccd7ae55795439e6946d344f1c47b588b39eab6fe3030df3a0e977 description=default/busybox/busybox id=b6d071d6-676f-4735-878c-4c8584cd1278 name=/runtime.v1.RuntimeService/StartContainer sandboxID=39f9d4cd1ccd6ebbfb6a69b365b697c1d41b2f9eba8bf0990c69cbc28b0c2ff9
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                          NAMESPACE
	ce6c07dd86ccd       gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998   6 seconds ago       Running             busybox                   0                   39f9d4cd1ccd6       busybox                                      default
	b823c0d2390b0       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                      11 seconds ago      Running             coredns                   0                   9ceb6cfa98843       coredns-66bc5c9577-n4nqj                     kube-system
	33f9b9d1a70f3       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      11 seconds ago      Running             storage-provisioner       0                   29242e0841e20       storage-provisioner                          kube-system
	e631b3797fb98       docker.io/kindest/kindnetd@sha256:7c22558dc06a570d46ea6e8a73b23cdc754eb81f7c08d3441a3171ad359ffc27    22 seconds ago      Running             kindnet-cni               0                   c5550986ab0df       kindnet-gqdkf                                kube-system
	fb361a1c4377e       36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691                                      23 seconds ago      Running             kube-proxy                0                   f146d43acb2a9       kube-proxy-wzb26                             kube-system
	a2cb80c069d7f       5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942                                      34 seconds ago      Running             kube-controller-manager   0                   5a4fe8edef4eb       kube-controller-manager-embed-certs-684693   kube-system
	6293932baca05       a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1                                      34 seconds ago      Running             etcd                      0                   3745069d9979f       etcd-embed-certs-684693                      kube-system
	7710881badd78       aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78                                      34 seconds ago      Running             kube-scheduler            0                   1ea8117d97afd       kube-scheduler-embed-certs-684693            kube-system
	50b64f0b6e92e       aa27095f5619377172f3d59289ccb2ba567ebea93a736d1705be068b2c030b0c                                      34 seconds ago      Running             kube-apiserver            0                   2072b0295a6db       kube-apiserver-embed-certs-684693            kube-system
	
	
	==> coredns [b823c0d2390b051693e55625c6ca75d6f9e00f014191d4c14d4d91d32cb13949] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 3e2243e8b9e7116f563b83b1933f477a68ba9ad4a829ed5d7e54629fb2ce53528b9bc6023030be20be434ad805fd246296dd428c64e9bbef3a70f22b8621f560
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:46192 - 25527 "HINFO IN 3823104293395888973.4621273477806958469. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.023302311s
	
	
	==> describe nodes <==
	Name:               embed-certs-684693
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=embed-certs-684693
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=65b0339f3ab6fa9cf527eb915d9288ef7a9c7fef
	                    minikube.k8s.io/name=embed-certs-684693
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_25T19_01_46_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 25 Dec 2025 19:01:42 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-684693
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 25 Dec 2025 19:02:05 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 25 Dec 2025 19:02:03 +0000   Thu, 25 Dec 2025 19:01:40 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 25 Dec 2025 19:02:03 +0000   Thu, 25 Dec 2025 19:01:40 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 25 Dec 2025 19:02:03 +0000   Thu, 25 Dec 2025 19:01:40 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 25 Dec 2025 19:02:03 +0000   Thu, 25 Dec 2025 19:02:03 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    embed-certs-684693
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863360Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863360Ki
	  pods:               110
	System Info:
	  Machine ID:                 6a05ebbd9dbde47388fc365d694bbbfd
	  System UUID:                23021cb7-5678-4260-b426-ee2032296d45
	  Boot ID:                    665c5054-bd76-444c-ba4d-23c4edde1464
	  Kernel Version:             6.8.0-1045-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.3
	  Kubelet Version:            v1.34.3
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         9s
	  kube-system                 coredns-66bc5c9577-n4nqj                      100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     25s
	  kube-system                 etcd-embed-certs-684693                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         30s
	  kube-system                 kindnet-gqdkf                                 100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      25s
	  kube-system                 kube-apiserver-embed-certs-684693             250m (3%)     0 (0%)      0 (0%)           0 (0%)         30s
	  kube-system                 kube-controller-manager-embed-certs-684693    200m (2%)     0 (0%)      0 (0%)           0 (0%)         30s
	  kube-system                 kube-proxy-wzb26                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         25s
	  kube-system                 kube-scheduler-embed-certs-684693             100m (1%)     0 (0%)      0 (0%)           0 (0%)         30s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         24s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 23s   kube-proxy       
	  Normal  Starting                 30s   kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  30s   kubelet          Node embed-certs-684693 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    30s   kubelet          Node embed-certs-684693 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     30s   kubelet          Node embed-certs-684693 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           26s   node-controller  Node embed-certs-684693 event: Registered Node embed-certs-684693 in Controller
	  Normal  NodeReady                12s   kubelet          Node embed-certs-684693 status is now: NodeReady
	
	
	==> dmesg <==
	[Dec25 18:17] MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
	[  +0.001703] TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details.
	[  +0.001000] MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
	[  +0.084012] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
	[  +0.391152] i8042: Warning: Keylock active
	[  +0.010665] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.485479] block sda: the capability attribute has been deprecated.
	[  +0.079658] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.024208] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +4.790329] kauditd_printk_skb: 47 callbacks suppressed
	
	
	==> etcd [6293932baca0594f65d5a71e86fd156713441c72dbbc6b34822cac98bcacffc5] <==
	{"level":"warn","ts":"2025-12-25T19:01:42.048071Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43292","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-25T19:01:42.054529Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43318","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-25T19:01:42.062623Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43342","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-25T19:01:42.071209Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43372","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-25T19:01:42.078560Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43380","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-25T19:01:42.085091Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43398","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-25T19:01:42.091337Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43412","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-25T19:01:42.097757Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43430","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-25T19:01:42.103930Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43440","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-25T19:01:42.118999Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43462","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-25T19:01:42.129000Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43482","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-25T19:01:42.136448Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43492","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-25T19:01:42.143663Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43518","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-25T19:01:42.150291Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43566","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-25T19:01:42.156883Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43576","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-25T19:01:42.164187Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43596","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-25T19:01:42.171756Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43610","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-25T19:01:42.179108Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43632","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-25T19:01:42.186551Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43662","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-25T19:01:42.193210Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43674","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-25T19:01:42.199652Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43696","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-25T19:01:42.206994Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43712","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-25T19:01:42.219615Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43724","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-25T19:01:42.227075Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43748","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-25T19:01:42.282483Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43774","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 19:02:15 up 44 min,  0 user,  load average: 2.72, 2.43, 1.73
	Linux embed-certs-684693 6.8.0-1045-gcp #48~22.04.1-Ubuntu SMP Tue Nov 25 13:07:56 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [e631b3797fb987a886ea6c4faf15396e74b35db9b9fa5d17d00d2a6c7ee08e6a] <==
	I1225 19:01:52.737520       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1225 19:01:52.737780       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1225 19:01:52.737975       1 main.go:148] setting mtu 1500 for CNI 
	I1225 19:01:52.738006       1 main.go:178] kindnetd IP family: "ipv4"
	I1225 19:01:52.738023       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-12-25T19:01:52Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1225 19:01:52.845609       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1225 19:01:52.936738       1 controller.go:381] "Waiting for informer caches to sync"
	I1225 19:01:52.936762       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1225 19:01:52.937056       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1225 19:01:53.337067       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1225 19:01:53.337094       1 metrics.go:72] Registering metrics
	I1225 19:01:53.337159       1 controller.go:711] "Syncing nftables rules"
	I1225 19:02:02.846060       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1225 19:02:02.846100       1 main.go:301] handling current node
	I1225 19:02:12.848834       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1225 19:02:12.848950       1 main.go:301] handling current node
	
	
	==> kube-apiserver [50b64f0b6e92ee5a8015650e31521c8b1550a735ed9f67b109aa2c422e86ebc9] <==
	I1225 19:01:42.764349       1 cache.go:39] Caches are synced for autoregister controller
	I1225 19:01:42.764505       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1225 19:01:42.764536       1 default_servicecidr_controller.go:166] Creating default ServiceCIDR with CIDRs: [10.96.0.0/12]
	I1225 19:01:42.768731       1 default_servicecidr_controller.go:228] Setting default ServiceCIDR condition Ready to True
	I1225 19:01:42.774395       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1225 19:01:42.775098       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1225 19:01:42.950994       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1225 19:01:43.652990       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1225 19:01:43.656731       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1225 19:01:43.656749       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1225 19:01:44.083210       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1225 19:01:44.115391       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1225 19:01:44.156210       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1225 19:01:44.161029       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.76.2]
	I1225 19:01:44.161749       1 controller.go:667] quota admission added evaluator for: endpoints
	I1225 19:01:44.165280       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1225 19:01:44.679289       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1225 19:01:45.252166       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1225 19:01:45.260361       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1225 19:01:45.267198       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1225 19:01:50.334396       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1225 19:01:50.337993       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1225 19:01:50.732505       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1225 19:01:50.784592       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	E1225 19:02:13.521411       1 conn.go:339] Error on socket receive: read tcp 192.168.76.2:8443->192.168.76.1:36000: use of closed network connection
	
	
	==> kube-controller-manager [a2cb80c069d7f7571ae1d5f69e240724b1366600e052bd8b324e34d836651618] <==
	I1225 19:01:49.678869       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1225 19:01:49.678912       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1225 19:01:49.678937       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1225 19:01:49.678935       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1225 19:01:49.678973       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1225 19:01:49.678976       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1225 19:01:49.678983       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1225 19:01:49.678993       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1225 19:01:49.679034       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1225 19:01:49.679273       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1225 19:01:49.679363       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1225 19:01:49.679380       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1225 19:01:49.679498       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1225 19:01:49.679555       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1225 19:01:49.679591       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1225 19:01:49.679770       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1225 19:01:49.681485       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1225 19:01:49.687973       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1225 19:01:49.688011       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1225 19:01:49.701931       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1225 19:01:49.701965       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1225 19:01:49.702049       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1225 19:01:49.702060       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1225 19:01:49.704637       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1225 19:02:04.630538       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [fb361a1c4377e4198d59ca510a749e301571511bccceac89e6bfd0686ff6bbde] <==
	I1225 19:01:51.229213       1 server_linux.go:53] "Using iptables proxy"
	I1225 19:01:51.297662       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1225 19:01:51.398414       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1225 19:01:51.398445       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E1225 19:01:51.398545       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1225 19:01:51.422645       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1225 19:01:51.422728       1 server_linux.go:132] "Using iptables Proxier"
	I1225 19:01:51.429859       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1225 19:01:51.430266       1 server.go:527] "Version info" version="v1.34.3"
	I1225 19:01:51.430504       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1225 19:01:51.433042       1 config.go:403] "Starting serviceCIDR config controller"
	I1225 19:01:51.433061       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1225 19:01:51.433166       1 config.go:200] "Starting service config controller"
	I1225 19:01:51.433191       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1225 19:01:51.433513       1 config.go:106] "Starting endpoint slice config controller"
	I1225 19:01:51.434382       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1225 19:01:51.433644       1 config.go:309] "Starting node config controller"
	I1225 19:01:51.434514       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1225 19:01:51.434545       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1225 19:01:51.533648       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1225 19:01:51.533701       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1225 19:01:51.534883       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [7710881badd78d174182badf193e346adae4826338b67cacb7bd75b0e63423d6] <==
	E1225 19:01:42.702155       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1225 19:01:42.702204       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1225 19:01:42.702226       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1225 19:01:42.702217       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1225 19:01:42.702278       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1225 19:01:42.702321       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1225 19:01:42.702336       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1225 19:01:42.702367       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1225 19:01:42.702385       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1225 19:01:42.702438       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1225 19:01:42.702531       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1225 19:01:42.702555       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1225 19:01:42.702567       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1225 19:01:42.702620       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1225 19:01:43.545779       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1225 19:01:43.587923       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1225 19:01:43.642313       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1225 19:01:43.737190       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1225 19:01:43.778331       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1225 19:01:43.843820       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E1225 19:01:43.852807       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1225 19:01:43.876113       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1225 19:01:43.891470       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1225 19:01:43.916547       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	I1225 19:01:46.199455       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Dec 25 19:01:46 embed-certs-684693 kubelet[1328]: I1225 19:01:46.120825    1328 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/etcd-embed-certs-684693" podStartSLOduration=1.120789416 podStartE2EDuration="1.120789416s" podCreationTimestamp="2025-12-25 19:01:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-25 19:01:46.120787975 +0000 UTC m=+1.129916613" watchObservedRunningTime="2025-12-25 19:01:46.120789416 +0000 UTC m=+1.129918053"
	Dec 25 19:01:46 embed-certs-684693 kubelet[1328]: I1225 19:01:46.140521    1328 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-embed-certs-684693" podStartSLOduration=1.140500622 podStartE2EDuration="1.140500622s" podCreationTimestamp="2025-12-25 19:01:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-25 19:01:46.131382435 +0000 UTC m=+1.140511071" watchObservedRunningTime="2025-12-25 19:01:46.140500622 +0000 UTC m=+1.149629259"
	Dec 25 19:01:46 embed-certs-684693 kubelet[1328]: I1225 19:01:46.148870    1328 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-embed-certs-684693" podStartSLOduration=1.148850903 podStartE2EDuration="1.148850903s" podCreationTimestamp="2025-12-25 19:01:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-25 19:01:46.140676993 +0000 UTC m=+1.149805622" watchObservedRunningTime="2025-12-25 19:01:46.148850903 +0000 UTC m=+1.157979542"
	Dec 25 19:01:46 embed-certs-684693 kubelet[1328]: I1225 19:01:46.149078    1328 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-embed-certs-684693" podStartSLOduration=1.149065644 podStartE2EDuration="1.149065644s" podCreationTimestamp="2025-12-25 19:01:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-25 19:01:46.148955647 +0000 UTC m=+1.158084307" watchObservedRunningTime="2025-12-25 19:01:46.149065644 +0000 UTC m=+1.158194281"
	Dec 25 19:01:49 embed-certs-684693 kubelet[1328]: I1225 19:01:49.702513    1328 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Dec 25 19:01:49 embed-certs-684693 kubelet[1328]: I1225 19:01:49.703204    1328 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Dec 25 19:01:50 embed-certs-684693 kubelet[1328]: I1225 19:01:50.897106    1328 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/655254fd-be22-4f04-a504-963b8b3da9f2-cni-cfg\") pod \"kindnet-gqdkf\" (UID: \"655254fd-be22-4f04-a504-963b8b3da9f2\") " pod="kube-system/kindnet-gqdkf"
	Dec 25 19:01:50 embed-certs-684693 kubelet[1328]: I1225 19:01:50.897157    1328 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/655254fd-be22-4f04-a504-963b8b3da9f2-xtables-lock\") pod \"kindnet-gqdkf\" (UID: \"655254fd-be22-4f04-a504-963b8b3da9f2\") " pod="kube-system/kindnet-gqdkf"
	Dec 25 19:01:50 embed-certs-684693 kubelet[1328]: I1225 19:01:50.897180    1328 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/655254fd-be22-4f04-a504-963b8b3da9f2-lib-modules\") pod \"kindnet-gqdkf\" (UID: \"655254fd-be22-4f04-a504-963b8b3da9f2\") " pod="kube-system/kindnet-gqdkf"
	Dec 25 19:01:50 embed-certs-684693 kubelet[1328]: I1225 19:01:50.897200    1328 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ppcbk\" (UniqueName: \"kubernetes.io/projected/655254fd-be22-4f04-a504-963b8b3da9f2-kube-api-access-ppcbk\") pod \"kindnet-gqdkf\" (UID: \"655254fd-be22-4f04-a504-963b8b3da9f2\") " pod="kube-system/kindnet-gqdkf"
	Dec 25 19:01:50 embed-certs-684693 kubelet[1328]: I1225 19:01:50.897263    1328 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/28372ff8-2832-49c8-b4ca-883af4201def-kube-proxy\") pod \"kube-proxy-wzb26\" (UID: \"28372ff8-2832-49c8-b4ca-883af4201def\") " pod="kube-system/kube-proxy-wzb26"
	Dec 25 19:01:50 embed-certs-684693 kubelet[1328]: I1225 19:01:50.897290    1328 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/28372ff8-2832-49c8-b4ca-883af4201def-lib-modules\") pod \"kube-proxy-wzb26\" (UID: \"28372ff8-2832-49c8-b4ca-883af4201def\") " pod="kube-system/kube-proxy-wzb26"
	Dec 25 19:01:50 embed-certs-684693 kubelet[1328]: I1225 19:01:50.897369    1328 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-95b9x\" (UniqueName: \"kubernetes.io/projected/28372ff8-2832-49c8-b4ca-883af4201def-kube-api-access-95b9x\") pod \"kube-proxy-wzb26\" (UID: \"28372ff8-2832-49c8-b4ca-883af4201def\") " pod="kube-system/kube-proxy-wzb26"
	Dec 25 19:01:50 embed-certs-684693 kubelet[1328]: I1225 19:01:50.897519    1328 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/28372ff8-2832-49c8-b4ca-883af4201def-xtables-lock\") pod \"kube-proxy-wzb26\" (UID: \"28372ff8-2832-49c8-b4ca-883af4201def\") " pod="kube-system/kube-proxy-wzb26"
	Dec 25 19:01:52 embed-certs-684693 kubelet[1328]: I1225 19:01:52.125320    1328 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-wzb26" podStartSLOduration=2.125298404 podStartE2EDuration="2.125298404s" podCreationTimestamp="2025-12-25 19:01:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-25 19:01:52.124419181 +0000 UTC m=+7.133547819" watchObservedRunningTime="2025-12-25 19:01:52.125298404 +0000 UTC m=+7.134427041"
	Dec 25 19:01:53 embed-certs-684693 kubelet[1328]: I1225 19:01:53.836816    1328 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-gqdkf" podStartSLOduration=2.504552014 podStartE2EDuration="3.836794126s" podCreationTimestamp="2025-12-25 19:01:50 +0000 UTC" firstStartedPulling="2025-12-25 19:01:51.119926201 +0000 UTC m=+6.129054831" lastFinishedPulling="2025-12-25 19:01:52.452168327 +0000 UTC m=+7.461296943" observedRunningTime="2025-12-25 19:01:53.125414307 +0000 UTC m=+8.134542945" watchObservedRunningTime="2025-12-25 19:01:53.836794126 +0000 UTC m=+8.845922763"
	Dec 25 19:02:03 embed-certs-684693 kubelet[1328]: I1225 19:02:03.200056    1328 kubelet_node_status.go:439] "Fast updating node status as it just became ready"
	Dec 25 19:02:03 embed-certs-684693 kubelet[1328]: I1225 19:02:03.293363    1328 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/7ee71ac9-a69c-4669-b8f2-a60dc3dac91f-tmp\") pod \"storage-provisioner\" (UID: \"7ee71ac9-a69c-4669-b8f2-a60dc3dac91f\") " pod="kube-system/storage-provisioner"
	Dec 25 19:02:03 embed-certs-684693 kubelet[1328]: I1225 19:02:03.293418    1328 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e02de70e-234a-4cf0-93f8-aac03bcce8cc-config-volume\") pod \"coredns-66bc5c9577-n4nqj\" (UID: \"e02de70e-234a-4cf0-93f8-aac03bcce8cc\") " pod="kube-system/coredns-66bc5c9577-n4nqj"
	Dec 25 19:02:03 embed-certs-684693 kubelet[1328]: I1225 19:02:03.293438    1328 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6wkkn\" (UniqueName: \"kubernetes.io/projected/e02de70e-234a-4cf0-93f8-aac03bcce8cc-kube-api-access-6wkkn\") pod \"coredns-66bc5c9577-n4nqj\" (UID: \"e02de70e-234a-4cf0-93f8-aac03bcce8cc\") " pod="kube-system/coredns-66bc5c9577-n4nqj"
	Dec 25 19:02:03 embed-certs-684693 kubelet[1328]: I1225 19:02:03.293461    1328 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nx2x5\" (UniqueName: \"kubernetes.io/projected/7ee71ac9-a69c-4669-b8f2-a60dc3dac91f-kube-api-access-nx2x5\") pod \"storage-provisioner\" (UID: \"7ee71ac9-a69c-4669-b8f2-a60dc3dac91f\") " pod="kube-system/storage-provisioner"
	Dec 25 19:02:04 embed-certs-684693 kubelet[1328]: I1225 19:02:04.154532    1328 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-n4nqj" podStartSLOduration=14.154510418 podStartE2EDuration="14.154510418s" podCreationTimestamp="2025-12-25 19:01:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-25 19:02:04.153966906 +0000 UTC m=+19.163095541" watchObservedRunningTime="2025-12-25 19:02:04.154510418 +0000 UTC m=+19.163639056"
	Dec 25 19:02:04 embed-certs-684693 kubelet[1328]: I1225 19:02:04.166083    1328 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=13.166066714 podStartE2EDuration="13.166066714s" podCreationTimestamp="2025-12-25 19:01:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-25 19:02:04.165744923 +0000 UTC m=+19.174873561" watchObservedRunningTime="2025-12-25 19:02:04.166066714 +0000 UTC m=+19.175195350"
	Dec 25 19:02:06 embed-certs-684693 kubelet[1328]: I1225 19:02:06.512606    1328 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-48zzd\" (UniqueName: \"kubernetes.io/projected/f8cdecb5-792b-4f73-bbd6-1c06cdaeb7bc-kube-api-access-48zzd\") pod \"busybox\" (UID: \"f8cdecb5-792b-4f73-bbd6-1c06cdaeb7bc\") " pod="default/busybox"
	Dec 25 19:02:08 embed-certs-684693 kubelet[1328]: I1225 19:02:08.164528    1328 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/busybox" podStartSLOduration=0.966417944 podStartE2EDuration="2.164509106s" podCreationTimestamp="2025-12-25 19:02:06 +0000 UTC" firstStartedPulling="2025-12-25 19:02:06.745367163 +0000 UTC m=+21.754495806" lastFinishedPulling="2025-12-25 19:02:07.943458336 +0000 UTC m=+22.952586968" observedRunningTime="2025-12-25 19:02:08.164475036 +0000 UTC m=+23.173603674" watchObservedRunningTime="2025-12-25 19:02:08.164509106 +0000 UTC m=+23.173637739"
	
	
	==> storage-provisioner [33f9b9d1a70f3113a5d736b62b22197a74106583d964bfcd0d901840d43ff4fd] <==
	I1225 19:02:03.584956       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1225 19:02:03.593606       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1225 19:02:03.593666       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1225 19:02:03.596090       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1225 19:02:03.600986       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1225 19:02:03.601158       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1225 19:02:03.601311       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-684693_75556f44-7062-4fc9-95ca-afce67459c71!
	I1225 19:02:03.601291       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"c9f66672-eb7f-41d5-8fa8-7c79d48325e3", APIVersion:"v1", ResourceVersion:"409", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-684693_75556f44-7062-4fc9-95ca-afce67459c71 became leader
	W1225 19:02:03.603290       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1225 19:02:03.607148       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1225 19:02:03.702301       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-684693_75556f44-7062-4fc9-95ca-afce67459c71!
	W1225 19:02:05.610651       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1225 19:02:05.615097       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1225 19:02:07.618057       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1225 19:02:07.622875       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1225 19:02:09.625871       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1225 19:02:09.630032       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1225 19:02:11.633616       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1225 19:02:11.638365       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1225 19:02:13.641765       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1225 19:02:13.645949       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-684693 -n embed-certs-684693
helpers_test.go:270: (dbg) Run:  kubectl --context embed-certs-684693 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:294: <<< TestStartStop/group/embed-certs/serial/EnableAddonWhileActive FAILED: end of post-mortem logs <<<
helpers_test.go:295: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (2.39s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/Pause (6.23s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p old-k8s-version-163446 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 pause -p old-k8s-version-163446 --alsologtostderr -v=1: exit status 80 (2.441207005s)

                                                
                                                
-- stdout --
	* Pausing node old-k8s-version-163446 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1225 19:02:53.654860  287588 out.go:360] Setting OutFile to fd 1 ...
	I1225 19:02:53.654987  287588 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1225 19:02:53.654999  287588 out.go:374] Setting ErrFile to fd 2...
	I1225 19:02:53.655004  287588 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1225 19:02:53.655355  287588 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22301-5579/.minikube/bin
	I1225 19:02:53.655644  287588 out.go:368] Setting JSON to false
	I1225 19:02:53.655670  287588 mustload.go:66] Loading cluster: old-k8s-version-163446
	I1225 19:02:53.656153  287588 config.go:182] Loaded profile config "old-k8s-version-163446": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1225 19:02:53.656680  287588 cli_runner.go:164] Run: docker container inspect old-k8s-version-163446 --format={{.State.Status}}
	I1225 19:02:53.677583  287588 host.go:66] Checking if "old-k8s-version-163446" exists ...
	I1225 19:02:53.677971  287588 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1225 19:02:53.740460  287588 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:4 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:77 OomKillDisable:false NGoroutines:85 SystemTime:2025-12-25 19:02:53.730248531 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:dea7da592f5d1d2b7755e3a161be07f43fad8f75 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1225 19:02:53.749359  287588 pause.go:60] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/22316/minikube-v1.37.0-1766570787-22316-amd64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1766570787-22316/minikube-v1.37.0-1766570787-22316-amd64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1766570787-22316-amd64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qemu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) preload-source:auto profile:old-k8s-version-163446 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) rosetta:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true) wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1225 19:02:53.780491  287588 out.go:179] * Pausing node old-k8s-version-163446 ... 
	I1225 19:02:53.781507  287588 host.go:66] Checking if "old-k8s-version-163446" exists ...
	I1225 19:02:53.781801  287588 ssh_runner.go:195] Run: systemctl --version
	I1225 19:02:53.781849  287588 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-163446
	I1225 19:02:53.802575  287588 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33073 SSHKeyPath:/home/jenkins/minikube-integration/22301-5579/.minikube/machines/old-k8s-version-163446/id_rsa Username:docker}
	I1225 19:02:53.896023  287588 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1225 19:02:53.922710  287588 pause.go:52] kubelet running: true
	I1225 19:02:53.922802  287588 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1225 19:02:54.095775  287588 cri.go:61] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1225 19:02:54.095857  287588 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1225 19:02:54.167791  287588 cri.go:96] found id: "4ce1005c7b5926eec1ae94602837760de0b75dfa3656524847d215328c75ac0b"
	I1225 19:02:54.167809  287588 cri.go:96] found id: "ccffe0a74970948877693b5a337809301f8eb0c24483e7ad98ec3964e8a6ee9d"
	I1225 19:02:54.167813  287588 cri.go:96] found id: "d25ed4ed70040fac28d88caa14abd75d2a95994c5887f5143d7fa3e7f5b52c82"
	I1225 19:02:54.167816  287588 cri.go:96] found id: "511e075a73b0123446e15801390ee877057b17d9055b6b3110d706ac86692627"
	I1225 19:02:54.167819  287588 cri.go:96] found id: "376a01fa2f5cd87c0dae38ad74332c0ae0c0d93fa441f19a90ff655c9ac8f482"
	I1225 19:02:54.167831  287588 cri.go:96] found id: "b4b49a940b58f765b0e9b7ce25aea04517e3af0b3e9f3d8cb36a460d92e868f4"
	I1225 19:02:54.167836  287588 cri.go:96] found id: "739051af3caddbf4be898cc7e7f82a012b1edd3b32b01e120d48d8420bf77f67"
	I1225 19:02:54.167839  287588 cri.go:96] found id: "c1c1926bfed12740e7d65b2cd81a01a86dd6a1887ce4e9b9fc5fd2fa5d9e0552"
	I1225 19:02:54.167843  287588 cri.go:96] found id: "b66569b95e263d0c33bf3838b444600f919279c26935aa24c1bd52a5a645a4dd"
	I1225 19:02:54.167850  287588 cri.go:96] found id: "ea767d69b5c8b7ce73aad86ce46fdf6f6047c47c581f8fb1f16f896ca43c1533"
	I1225 19:02:54.167856  287588 cri.go:96] found id: "e37efd9b2c0f4e3339db38b105725fe701ef12b037a5a8d35c075b3f754150c7"
	I1225 19:02:54.167864  287588 cri.go:96] found id: ""
	I1225 19:02:54.167928  287588 ssh_runner.go:195] Run: sudo runc list -f json
	I1225 19:02:54.179841  287588 retry.go:84] will retry after 200ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-25T19:02:54Z" level=error msg="open /run/runc: no such file or directory"
	I1225 19:02:54.424326  287588 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1225 19:02:54.437211  287588 pause.go:52] kubelet running: false
	I1225 19:02:54.437257  287588 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1225 19:02:54.584732  287588 cri.go:61] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1225 19:02:54.584850  287588 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1225 19:02:54.652750  287588 cri.go:96] found id: "4ce1005c7b5926eec1ae94602837760de0b75dfa3656524847d215328c75ac0b"
	I1225 19:02:54.652771  287588 cri.go:96] found id: "ccffe0a74970948877693b5a337809301f8eb0c24483e7ad98ec3964e8a6ee9d"
	I1225 19:02:54.652775  287588 cri.go:96] found id: "d25ed4ed70040fac28d88caa14abd75d2a95994c5887f5143d7fa3e7f5b52c82"
	I1225 19:02:54.652778  287588 cri.go:96] found id: "511e075a73b0123446e15801390ee877057b17d9055b6b3110d706ac86692627"
	I1225 19:02:54.652781  287588 cri.go:96] found id: "376a01fa2f5cd87c0dae38ad74332c0ae0c0d93fa441f19a90ff655c9ac8f482"
	I1225 19:02:54.652784  287588 cri.go:96] found id: "b4b49a940b58f765b0e9b7ce25aea04517e3af0b3e9f3d8cb36a460d92e868f4"
	I1225 19:02:54.652786  287588 cri.go:96] found id: "739051af3caddbf4be898cc7e7f82a012b1edd3b32b01e120d48d8420bf77f67"
	I1225 19:02:54.652789  287588 cri.go:96] found id: "c1c1926bfed12740e7d65b2cd81a01a86dd6a1887ce4e9b9fc5fd2fa5d9e0552"
	I1225 19:02:54.652793  287588 cri.go:96] found id: "b66569b95e263d0c33bf3838b444600f919279c26935aa24c1bd52a5a645a4dd"
	I1225 19:02:54.652800  287588 cri.go:96] found id: "ea767d69b5c8b7ce73aad86ce46fdf6f6047c47c581f8fb1f16f896ca43c1533"
	I1225 19:02:54.652805  287588 cri.go:96] found id: "e37efd9b2c0f4e3339db38b105725fe701ef12b037a5a8d35c075b3f754150c7"
	I1225 19:02:54.652809  287588 cri.go:96] found id: ""
	I1225 19:02:54.652851  287588 ssh_runner.go:195] Run: sudo runc list -f json
	I1225 19:02:55.014614  287588 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1225 19:02:55.027547  287588 pause.go:52] kubelet running: false
	I1225 19:02:55.027594  287588 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1225 19:02:55.166761  287588 cri.go:61] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1225 19:02:55.166826  287588 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1225 19:02:55.233654  287588 cri.go:96] found id: "4ce1005c7b5926eec1ae94602837760de0b75dfa3656524847d215328c75ac0b"
	I1225 19:02:55.233672  287588 cri.go:96] found id: "ccffe0a74970948877693b5a337809301f8eb0c24483e7ad98ec3964e8a6ee9d"
	I1225 19:02:55.233676  287588 cri.go:96] found id: "d25ed4ed70040fac28d88caa14abd75d2a95994c5887f5143d7fa3e7f5b52c82"
	I1225 19:02:55.233679  287588 cri.go:96] found id: "511e075a73b0123446e15801390ee877057b17d9055b6b3110d706ac86692627"
	I1225 19:02:55.233682  287588 cri.go:96] found id: "376a01fa2f5cd87c0dae38ad74332c0ae0c0d93fa441f19a90ff655c9ac8f482"
	I1225 19:02:55.233685  287588 cri.go:96] found id: "b4b49a940b58f765b0e9b7ce25aea04517e3af0b3e9f3d8cb36a460d92e868f4"
	I1225 19:02:55.233687  287588 cri.go:96] found id: "739051af3caddbf4be898cc7e7f82a012b1edd3b32b01e120d48d8420bf77f67"
	I1225 19:02:55.233690  287588 cri.go:96] found id: "c1c1926bfed12740e7d65b2cd81a01a86dd6a1887ce4e9b9fc5fd2fa5d9e0552"
	I1225 19:02:55.233698  287588 cri.go:96] found id: "b66569b95e263d0c33bf3838b444600f919279c26935aa24c1bd52a5a645a4dd"
	I1225 19:02:55.233703  287588 cri.go:96] found id: "ea767d69b5c8b7ce73aad86ce46fdf6f6047c47c581f8fb1f16f896ca43c1533"
	I1225 19:02:55.233705  287588 cri.go:96] found id: "e37efd9b2c0f4e3339db38b105725fe701ef12b037a5a8d35c075b3f754150c7"
	I1225 19:02:55.233708  287588 cri.go:96] found id: ""
	I1225 19:02:55.233743  287588 ssh_runner.go:195] Run: sudo runc list -f json
	I1225 19:02:55.790106  287588 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1225 19:02:55.803106  287588 pause.go:52] kubelet running: false
	I1225 19:02:55.803167  287588 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1225 19:02:55.944432  287588 cri.go:61] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1225 19:02:55.944515  287588 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1225 19:02:56.010367  287588 cri.go:96] found id: "4ce1005c7b5926eec1ae94602837760de0b75dfa3656524847d215328c75ac0b"
	I1225 19:02:56.010392  287588 cri.go:96] found id: "ccffe0a74970948877693b5a337809301f8eb0c24483e7ad98ec3964e8a6ee9d"
	I1225 19:02:56.010402  287588 cri.go:96] found id: "d25ed4ed70040fac28d88caa14abd75d2a95994c5887f5143d7fa3e7f5b52c82"
	I1225 19:02:56.010407  287588 cri.go:96] found id: "511e075a73b0123446e15801390ee877057b17d9055b6b3110d706ac86692627"
	I1225 19:02:56.010411  287588 cri.go:96] found id: "376a01fa2f5cd87c0dae38ad74332c0ae0c0d93fa441f19a90ff655c9ac8f482"
	I1225 19:02:56.010416  287588 cri.go:96] found id: "b4b49a940b58f765b0e9b7ce25aea04517e3af0b3e9f3d8cb36a460d92e868f4"
	I1225 19:02:56.010420  287588 cri.go:96] found id: "739051af3caddbf4be898cc7e7f82a012b1edd3b32b01e120d48d8420bf77f67"
	I1225 19:02:56.010424  287588 cri.go:96] found id: "c1c1926bfed12740e7d65b2cd81a01a86dd6a1887ce4e9b9fc5fd2fa5d9e0552"
	I1225 19:02:56.010429  287588 cri.go:96] found id: "b66569b95e263d0c33bf3838b444600f919279c26935aa24c1bd52a5a645a4dd"
	I1225 19:02:56.010435  287588 cri.go:96] found id: "ea767d69b5c8b7ce73aad86ce46fdf6f6047c47c581f8fb1f16f896ca43c1533"
	I1225 19:02:56.010440  287588 cri.go:96] found id: "e37efd9b2c0f4e3339db38b105725fe701ef12b037a5a8d35c075b3f754150c7"
	I1225 19:02:56.010445  287588 cri.go:96] found id: ""
	I1225 19:02:56.010486  287588 ssh_runner.go:195] Run: sudo runc list -f json
	I1225 19:02:56.023709  287588 out.go:203] 
	W1225 19:02:56.024787  287588 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-25T19:02:56Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-25T19:02:56Z" level=error msg="open /run/runc: no such file or directory"
	
	W1225 19:02:56.024805  287588 out.go:285] * 
	* 
	W1225 19:02:56.026636  287588 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1225 19:02:56.027729  287588 out.go:203] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:309: out/minikube-linux-amd64 pause -p old-k8s-version-163446 --alsologtostderr -v=1 failed: exit status 80
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect old-k8s-version-163446
helpers_test.go:244: (dbg) docker inspect old-k8s-version-163446:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "37396ae2407e2231768404ec79c8765ad89338beefc37987d4c4bd842f074e05",
	        "Created": "2025-12-25T19:00:38.731521693Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 276414,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-25T19:01:51.961086062Z",
	            "FinishedAt": "2025-12-25T19:01:50.931804842Z"
	        },
	        "Image": "sha256:c444b5e11df9d6bc496256b0e11f4e11deb33a885211ded2c3a4667df55bcb6b",
	        "ResolvConfPath": "/var/lib/docker/containers/37396ae2407e2231768404ec79c8765ad89338beefc37987d4c4bd842f074e05/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/37396ae2407e2231768404ec79c8765ad89338beefc37987d4c4bd842f074e05/hostname",
	        "HostsPath": "/var/lib/docker/containers/37396ae2407e2231768404ec79c8765ad89338beefc37987d4c4bd842f074e05/hosts",
	        "LogPath": "/var/lib/docker/containers/37396ae2407e2231768404ec79c8765ad89338beefc37987d4c4bd842f074e05/37396ae2407e2231768404ec79c8765ad89338beefc37987d4c4bd842f074e05-json.log",
	        "Name": "/old-k8s-version-163446",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-163446:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "old-k8s-version-163446",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "37396ae2407e2231768404ec79c8765ad89338beefc37987d4c4bd842f074e05",
	                "LowerDir": "/var/lib/docker/overlay2/da66b1259c79665422104588e6a075c075b8c19dd9bb347e3c8d2431d2f57222-init/diff:/var/lib/docker/overlay2/8152586e7e91edad0090b5c322534edd1346ae6dc28cbca1827aa4c23f366758/diff",
	                "MergedDir": "/var/lib/docker/overlay2/da66b1259c79665422104588e6a075c075b8c19dd9bb347e3c8d2431d2f57222/merged",
	                "UpperDir": "/var/lib/docker/overlay2/da66b1259c79665422104588e6a075c075b8c19dd9bb347e3c8d2431d2f57222/diff",
	                "WorkDir": "/var/lib/docker/overlay2/da66b1259c79665422104588e6a075c075b8c19dd9bb347e3c8d2431d2f57222/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-163446",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-163446/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-163446",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-163446",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-163446",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "399b9f1b98e16b80d31e8c5b0795c6f562eed3a6df436c25c4f911b60ca7d8f7",
	            "SandboxKey": "/var/run/docker/netns/399b9f1b98e1",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33073"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33074"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33077"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33075"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33076"
	                    }
	                ]
	            },
	            "Networks": {
	                "old-k8s-version-163446": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.103.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "c6b6e067d0596f86d64c9b68f4f95f2e3f9026a738d9a6486ac091374c416820",
	                    "EndpointID": "6b70cdf41c433d8e06cdfe30d961233762dd32fdc117cc9d7b24bc02164dadcb",
	                    "Gateway": "192.168.103.1",
	                    "IPAddress": "192.168.103.2",
	                    "MacAddress": "86:c9:c7:86:22:55",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "old-k8s-version-163446",
	                        "37396ae2407e"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-163446 -n old-k8s-version-163446
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-163446 -n old-k8s-version-163446: exit status 2 (327.865283ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
helpers_test.go:253: <<< TestStartStop/group/old-k8s-version/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-163446 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-amd64 -p old-k8s-version-163446 logs -n 25: (1.079773743s)
helpers_test.go:261: TestStartStop/group/old-k8s-version/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────
────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │          PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────
────┤
	│ delete  │ -p test-preload-632730                                                                                                                                                                                                                        │ test-preload-632730       │ jenkins │ v1.37.0 │ 25 Dec 25 19:00 UTC │ 25 Dec 25 19:00 UTC │
	│ start   │ -p kubernetes-upgrade-498224 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                                                                                      │ kubernetes-upgrade-498224 │ jenkins │ v1.37.0 │ 25 Dec 25 19:00 UTC │ 25 Dec 25 19:00 UTC │
	│ delete  │ -p stopped-upgrade-746190                                                                                                                                                                                                                     │ stopped-upgrade-746190    │ jenkins │ v1.37.0 │ 25 Dec 25 19:00 UTC │ 25 Dec 25 19:00 UTC │
	│ start   │ -p old-k8s-version-163446 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-163446    │ jenkins │ v1.37.0 │ 25 Dec 25 19:00 UTC │ 25 Dec 25 19:01 UTC │
	│ stop    │ -p kubernetes-upgrade-498224 --alsologtostderr                                                                                                                                                                                                │ kubernetes-upgrade-498224 │ jenkins │ v1.37.0 │ 25 Dec 25 19:00 UTC │ 25 Dec 25 19:00 UTC │
	│ start   │ -p kubernetes-upgrade-498224 --memory=3072 --kubernetes-version=v1.35.0-rc.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                                                                                 │ kubernetes-upgrade-498224 │ jenkins │ v1.37.0 │ 25 Dec 25 19:00 UTC │                     │
	│ start   │ -p cert-expiration-002470 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio                                                                                                                                     │ cert-expiration-002470    │ jenkins │ v1.37.0 │ 25 Dec 25 19:00 UTC │ 25 Dec 25 19:01 UTC │
	│ delete  │ -p cert-expiration-002470                                                                                                                                                                                                                     │ cert-expiration-002470    │ jenkins │ v1.37.0 │ 25 Dec 25 19:01 UTC │ 25 Dec 25 19:01 UTC │
	│ start   │ -p no-preload-148352 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-rc.1                                                                                  │ no-preload-148352         │ jenkins │ v1.37.0 │ 25 Dec 25 19:01 UTC │ 25 Dec 25 19:01 UTC │
	│ delete  │ -p running-upgrade-861192                                                                                                                                                                                                                     │ running-upgrade-861192    │ jenkins │ v1.37.0 │ 25 Dec 25 19:01 UTC │ 25 Dec 25 19:01 UTC │
	│ start   │ -p embed-certs-684693 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.3                                                                                        │ embed-certs-684693        │ jenkins │ v1.37.0 │ 25 Dec 25 19:01 UTC │ 25 Dec 25 19:02 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-163446 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-163446    │ jenkins │ v1.37.0 │ 25 Dec 25 19:01 UTC │                     │
	│ stop    │ -p old-k8s-version-163446 --alsologtostderr -v=3                                                                                                                                                                                              │ old-k8s-version-163446    │ jenkins │ v1.37.0 │ 25 Dec 25 19:01 UTC │ 25 Dec 25 19:01 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-163446 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-163446    │ jenkins │ v1.37.0 │ 25 Dec 25 19:01 UTC │ 25 Dec 25 19:01 UTC │
	│ start   │ -p old-k8s-version-163446 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-163446    │ jenkins │ v1.37.0 │ 25 Dec 25 19:01 UTC │ 25 Dec 25 19:02 UTC │
	│ addons  │ enable metrics-server -p no-preload-148352 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-148352         │ jenkins │ v1.37.0 │ 25 Dec 25 19:02 UTC │                     │
	│ stop    │ -p no-preload-148352 --alsologtostderr -v=3                                                                                                                                                                                                   │ no-preload-148352         │ jenkins │ v1.37.0 │ 25 Dec 25 19:02 UTC │ 25 Dec 25 19:02 UTC │
	│ addons  │ enable metrics-server -p embed-certs-684693 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-684693        │ jenkins │ v1.37.0 │ 25 Dec 25 19:02 UTC │                     │
	│ stop    │ -p embed-certs-684693 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-684693        │ jenkins │ v1.37.0 │ 25 Dec 25 19:02 UTC │ 25 Dec 25 19:02 UTC │
	│ addons  │ enable dashboard -p no-preload-148352 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-148352         │ jenkins │ v1.37.0 │ 25 Dec 25 19:02 UTC │ 25 Dec 25 19:02 UTC │
	│ start   │ -p no-preload-148352 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-rc.1                                                                                  │ no-preload-148352         │ jenkins │ v1.37.0 │ 25 Dec 25 19:02 UTC │                     │
	│ addons  │ enable dashboard -p embed-certs-684693 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-684693        │ jenkins │ v1.37.0 │ 25 Dec 25 19:02 UTC │ 25 Dec 25 19:02 UTC │
	│ start   │ -p embed-certs-684693 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.3                                                                                        │ embed-certs-684693        │ jenkins │ v1.37.0 │ 25 Dec 25 19:02 UTC │                     │
	│ image   │ old-k8s-version-163446 image list --format=json                                                                                                                                                                                               │ old-k8s-version-163446    │ jenkins │ v1.37.0 │ 25 Dec 25 19:02 UTC │ 25 Dec 25 19:02 UTC │
	│ pause   │ -p old-k8s-version-163446 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-163446    │ jenkins │ v1.37.0 │ 25 Dec 25 19:02 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────
────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/25 19:02:34
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1225 19:02:34.332240  283722 out.go:360] Setting OutFile to fd 1 ...
	I1225 19:02:34.332340  283722 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1225 19:02:34.332351  283722 out.go:374] Setting ErrFile to fd 2...
	I1225 19:02:34.332356  283722 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1225 19:02:34.332559  283722 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22301-5579/.minikube/bin
	I1225 19:02:34.333051  283722 out.go:368] Setting JSON to false
	I1225 19:02:34.334249  283722 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":2702,"bootTime":1766686652,"procs":339,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1225 19:02:34.334303  283722 start.go:143] virtualization: kvm guest
	I1225 19:02:34.336161  283722 out.go:179] * [embed-certs-684693] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1225 19:02:34.337358  283722 out.go:179]   - MINIKUBE_LOCATION=22301
	I1225 19:02:34.337377  283722 notify.go:221] Checking for updates...
	I1225 19:02:34.339392  283722 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1225 19:02:34.340487  283722 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22301-5579/kubeconfig
	I1225 19:02:34.341696  283722 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22301-5579/.minikube
	I1225 19:02:34.342911  283722 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1225 19:02:34.344133  283722 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1225 19:02:34.345601  283722 config.go:182] Loaded profile config "embed-certs-684693": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1225 19:02:34.346165  283722 driver.go:422] Setting default libvirt URI to qemu:///system
	I1225 19:02:34.368350  283722 docker.go:124] docker version: linux-29.1.3:Docker Engine - Community
	I1225 19:02:34.368437  283722 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1225 19:02:34.430944  283722 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:64 OomKillDisable:false NGoroutines:75 SystemTime:2025-12-25 19:02:34.421454451 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:dea7da592f5d1d2b7755e3a161be07f43fad8f75 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1225 19:02:34.431052  283722 docker.go:319] overlay module found
	I1225 19:02:34.433366  283722 out.go:179] * Using the docker driver based on existing profile
	I1225 19:02:34.434389  283722 start.go:309] selected driver: docker
	I1225 19:02:34.434403  283722 start.go:928] validating driver "docker" against &{Name:embed-certs-684693 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:embed-certs-684693 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1225 19:02:34.434484  283722 start.go:939] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1225 19:02:34.435062  283722 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1225 19:02:34.496223  283722 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:64 OomKillDisable:false NGoroutines:75 SystemTime:2025-12-25 19:02:34.485758437 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:dea7da592f5d1d2b7755e3a161be07f43fad8f75 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1225 19:02:34.496551  283722 start_flags.go:1019] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1225 19:02:34.496584  283722 cni.go:84] Creating CNI manager for ""
	I1225 19:02:34.496654  283722 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1225 19:02:34.496710  283722 start.go:353] cluster config:
	{Name:embed-certs-684693 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:embed-certs-684693 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1225 19:02:34.498661  283722 out.go:179] * Starting "embed-certs-684693" primary control-plane node in "embed-certs-684693" cluster
	I1225 19:02:34.499767  283722 cache.go:134] Beginning downloading kic base image for docker with crio
	I1225 19:02:34.500841  283722 out.go:179] * Pulling base image v0.0.48-1766570851-22316 ...
	I1225 19:02:34.501745  283722 preload.go:188] Checking if preload exists for k8s version v1.34.3 and runtime crio
	I1225 19:02:34.501774  283722 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22301-5579/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.3-cri-o-overlay-amd64.tar.lz4
	I1225 19:02:34.501782  283722 cache.go:65] Caching tarball of preloaded images
	I1225 19:02:34.501832  283722 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a in local docker daemon
	I1225 19:02:34.501848  283722 preload.go:251] Found /home/jenkins/minikube-integration/22301-5579/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1225 19:02:34.501855  283722 cache.go:68] Finished verifying existence of preloaded tar for v1.34.3 on crio
	I1225 19:02:34.502014  283722 profile.go:143] Saving config to /home/jenkins/minikube-integration/22301-5579/.minikube/profiles/embed-certs-684693/config.json ...
	I1225 19:02:34.522061  283722 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a in local docker daemon, skipping pull
	I1225 19:02:34.522083  283722 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a exists in daemon, skipping load
	I1225 19:02:34.522117  283722 cache.go:243] Successfully downloaded all kic artifacts
	I1225 19:02:34.522151  283722 start.go:360] acquireMachinesLock for embed-certs-684693: {Name:mkcef018e2fd6119543ae4deda4e408dabf7b389 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1225 19:02:34.522238  283722 start.go:364] duration metric: took 50.604µs to acquireMachinesLock for "embed-certs-684693"
	I1225 19:02:34.522271  283722 start.go:96] Skipping create...Using existing machine configuration
	I1225 19:02:34.522282  283722 fix.go:54] fixHost starting: 
	I1225 19:02:34.522528  283722 cli_runner.go:164] Run: docker container inspect embed-certs-684693 --format={{.State.Status}}
	I1225 19:02:34.540267  283722 fix.go:112] recreateIfNeeded on embed-certs-684693: state=Stopped err=<nil>
	W1225 19:02:34.540317  283722 fix.go:138] unexpected machine state, will restart: <nil>
	W1225 19:02:32.250455  276130 pod_ready.go:104] pod "coredns-5dd5756b68-chdzr" is not "Ready", error: <nil>
	W1225 19:02:34.748137  276130 pod_ready.go:104] pod "coredns-5dd5756b68-chdzr" is not "Ready", error: <nil>
	I1225 19:02:32.709561  281279 api_server.go:299] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1225 19:02:32.713814  281279 api_server.go:325] https://192.168.85.2:8443/healthz returned 200:
	ok
	I1225 19:02:32.714806  281279 api_server.go:141] control plane version: v1.35.0-rc.1
	I1225 19:02:32.714851  281279 api_server.go:131] duration metric: took 1.006292879s to wait for apiserver health ...
	I1225 19:02:32.714860  281279 system_pods.go:43] waiting for kube-system pods to appear ...
	I1225 19:02:32.718354  281279 system_pods.go:59] 8 kube-system pods found
	I1225 19:02:32.718397  281279 system_pods.go:61] "coredns-7d764666f9-lqvms" [87fc533e-6490-4d36-a61b-a754a22afd56] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1225 19:02:32.718415  281279 system_pods.go:61] "etcd-no-preload-148352" [07fbfda5-ced9-48bb-819a-27d7a9d3c8c6] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1225 19:02:32.718426  281279 system_pods.go:61] "kindnet-jx25d" [25f416b3-e74e-4d6e-9b1b-d4ddf07659c4] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1225 19:02:32.718440  281279 system_pods.go:61] "kube-apiserver-no-preload-148352" [9bec5758-56c2-488b-8593-35fcdb4ec786] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1225 19:02:32.718452  281279 system_pods.go:61] "kube-controller-manager-no-preload-148352" [b44b6979-c22b-402f-8ce0-fabd78630461] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1225 19:02:32.718466  281279 system_pods.go:61] "kube-proxy-j2p4x" [ae9faca6-3e41-4e10-ae96-b7a397c3be75] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1225 19:02:32.718482  281279 system_pods.go:61] "kube-scheduler-no-preload-148352" [6dcf4763-851f-4d07-b708-4b5a579c03cb] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1225 19:02:32.718493  281279 system_pods.go:61] "storage-provisioner" [4caa74a1-bb32-45a7-9cc3-d0af791be23e] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1225 19:02:32.718501  281279 system_pods.go:74] duration metric: took 3.635053ms to wait for pod list to return data ...
	I1225 19:02:32.718511  281279 default_sa.go:34] waiting for default service account to be created ...
	I1225 19:02:32.720677  281279 default_sa.go:45] found service account: "default"
	I1225 19:02:32.720695  281279 default_sa.go:55] duration metric: took 2.176461ms for default service account to be created ...
	I1225 19:02:32.720702  281279 system_pods.go:116] waiting for k8s-apps to be running ...
	I1225 19:02:32.723119  281279 system_pods.go:86] 8 kube-system pods found
	I1225 19:02:32.723143  281279 system_pods.go:89] "coredns-7d764666f9-lqvms" [87fc533e-6490-4d36-a61b-a754a22afd56] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1225 19:02:32.723150  281279 system_pods.go:89] "etcd-no-preload-148352" [07fbfda5-ced9-48bb-819a-27d7a9d3c8c6] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1225 19:02:32.723181  281279 system_pods.go:89] "kindnet-jx25d" [25f416b3-e74e-4d6e-9b1b-d4ddf07659c4] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1225 19:02:32.723188  281279 system_pods.go:89] "kube-apiserver-no-preload-148352" [9bec5758-56c2-488b-8593-35fcdb4ec786] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1225 19:02:32.723197  281279 system_pods.go:89] "kube-controller-manager-no-preload-148352" [b44b6979-c22b-402f-8ce0-fabd78630461] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1225 19:02:32.723202  281279 system_pods.go:89] "kube-proxy-j2p4x" [ae9faca6-3e41-4e10-ae96-b7a397c3be75] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1225 19:02:32.723216  281279 system_pods.go:89] "kube-scheduler-no-preload-148352" [6dcf4763-851f-4d07-b708-4b5a579c03cb] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1225 19:02:32.723224  281279 system_pods.go:89] "storage-provisioner" [4caa74a1-bb32-45a7-9cc3-d0af791be23e] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1225 19:02:32.723236  281279 system_pods.go:126] duration metric: took 2.529355ms to wait for k8s-apps to be running ...
	I1225 19:02:32.723244  281279 system_svc.go:44] waiting for kubelet service to be running ....
	I1225 19:02:32.723283  281279 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1225 19:02:32.736188  281279 system_svc.go:56] duration metric: took 12.935551ms WaitForService to wait for kubelet
	I1225 19:02:32.736219  281279 kubeadm.go:587] duration metric: took 3.193223437s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1225 19:02:32.736245  281279 node_conditions.go:102] verifying NodePressure condition ...
	I1225 19:02:32.738810  281279 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1225 19:02:32.738830  281279 node_conditions.go:123] node cpu capacity is 8
	I1225 19:02:32.738843  281279 node_conditions.go:105] duration metric: took 2.58894ms to run NodePressure ...
	I1225 19:02:32.738854  281279 start.go:242] waiting for startup goroutines ...
	I1225 19:02:32.738863  281279 start.go:247] waiting for cluster config update ...
	I1225 19:02:32.738876  281279 start.go:256] writing updated cluster config ...
	I1225 19:02:32.739169  281279 ssh_runner.go:195] Run: rm -f paused
	I1225 19:02:32.742992  281279 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1225 19:02:32.746740  281279 pod_ready.go:83] waiting for pod "coredns-7d764666f9-lqvms" in "kube-system" namespace to be "Ready" or be gone ...
	W1225 19:02:34.752237  281279 pod_ready.go:104] pod "coredns-7d764666f9-lqvms" is not "Ready", error: <nil>
	W1225 19:02:36.753074  281279 pod_ready.go:104] pod "coredns-7d764666f9-lqvms" is not "Ready", error: <nil>
	I1225 19:02:34.542000  283722 out.go:252] * Restarting existing docker container for "embed-certs-684693" ...
	I1225 19:02:34.542086  283722 cli_runner.go:164] Run: docker start embed-certs-684693
	I1225 19:02:34.793389  283722 cli_runner.go:164] Run: docker container inspect embed-certs-684693 --format={{.State.Status}}
	I1225 19:02:34.810888  283722 kic.go:430] container "embed-certs-684693" state is running.
	I1225 19:02:34.811290  283722 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-684693
	I1225 19:02:34.831852  283722 profile.go:143] Saving config to /home/jenkins/minikube-integration/22301-5579/.minikube/profiles/embed-certs-684693/config.json ...
	I1225 19:02:34.832148  283722 machine.go:94] provisionDockerMachine start ...
	I1225 19:02:34.832232  283722 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-684693
	I1225 19:02:34.850382  283722 main.go:144] libmachine: Using SSH client type: native
	I1225 19:02:34.850645  283722 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e300] 0x850fa0 <nil>  [] 0s} 127.0.0.1 33083 <nil> <nil>}
	I1225 19:02:34.850670  283722 main.go:144] libmachine: About to run SSH command:
	hostname
	I1225 19:02:34.851245  283722 main.go:144] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:38634->127.0.0.1:33083: read: connection reset by peer
	I1225 19:02:37.988590  283722 main.go:144] libmachine: SSH cmd err, output: <nil>: embed-certs-684693
	
	I1225 19:02:37.988620  283722 ubuntu.go:182] provisioning hostname "embed-certs-684693"
	I1225 19:02:37.988683  283722 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-684693
	I1225 19:02:38.011764  283722 main.go:144] libmachine: Using SSH client type: native
	I1225 19:02:38.012106  283722 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e300] 0x850fa0 <nil>  [] 0s} 127.0.0.1 33083 <nil> <nil>}
	I1225 19:02:38.012128  283722 main.go:144] libmachine: About to run SSH command:
	sudo hostname embed-certs-684693 && echo "embed-certs-684693" | sudo tee /etc/hostname
	I1225 19:02:38.164002  283722 main.go:144] libmachine: SSH cmd err, output: <nil>: embed-certs-684693
	
	I1225 19:02:38.164082  283722 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-684693
	I1225 19:02:38.186808  283722 main.go:144] libmachine: Using SSH client type: native
	I1225 19:02:38.187128  283722 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e300] 0x850fa0 <nil>  [] 0s} 127.0.0.1 33083 <nil> <nil>}
	I1225 19:02:38.187156  283722 main.go:144] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-684693' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-684693/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-684693' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1225 19:02:38.325407  283722 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	I1225 19:02:38.325438  283722 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22301-5579/.minikube CaCertPath:/home/jenkins/minikube-integration/22301-5579/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22301-5579/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22301-5579/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22301-5579/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22301-5579/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22301-5579/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22301-5579/.minikube}
	I1225 19:02:38.325493  283722 ubuntu.go:190] setting up certificates
	I1225 19:02:38.325503  283722 provision.go:84] configureAuth start
	I1225 19:02:38.325557  283722 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-684693
	I1225 19:02:38.348271  283722 provision.go:143] copyHostCerts
	I1225 19:02:38.348352  283722 exec_runner.go:144] found /home/jenkins/minikube-integration/22301-5579/.minikube/ca.pem, removing ...
	I1225 19:02:38.348372  283722 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22301-5579/.minikube/ca.pem
	I1225 19:02:38.348453  283722 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22301-5579/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22301-5579/.minikube/ca.pem (1078 bytes)
	I1225 19:02:38.348589  283722 exec_runner.go:144] found /home/jenkins/minikube-integration/22301-5579/.minikube/cert.pem, removing ...
	I1225 19:02:38.348602  283722 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22301-5579/.minikube/cert.pem
	I1225 19:02:38.348644  283722 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22301-5579/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22301-5579/.minikube/cert.pem (1123 bytes)
	I1225 19:02:38.348742  283722 exec_runner.go:144] found /home/jenkins/minikube-integration/22301-5579/.minikube/key.pem, removing ...
	I1225 19:02:38.348753  283722 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22301-5579/.minikube/key.pem
	I1225 19:02:38.348797  283722 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22301-5579/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22301-5579/.minikube/key.pem (1679 bytes)
	I1225 19:02:38.348889  283722 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22301-5579/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22301-5579/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22301-5579/.minikube/certs/ca-key.pem org=jenkins.embed-certs-684693 san=[127.0.0.1 192.168.76.2 embed-certs-684693 localhost minikube]
	I1225 19:02:38.432213  283722 provision.go:177] copyRemoteCerts
	I1225 19:02:38.432280  283722 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1225 19:02:38.432331  283722 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-684693
	I1225 19:02:38.453474  283722 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33083 SSHKeyPath:/home/jenkins/minikube-integration/22301-5579/.minikube/machines/embed-certs-684693/id_rsa Username:docker}
	I1225 19:02:38.554595  283722 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22301-5579/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1225 19:02:38.576396  283722 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22301-5579/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1225 19:02:38.596992  283722 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22301-5579/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1225 19:02:38.617721  283722 provision.go:87] duration metric: took 292.206468ms to configureAuth
	I1225 19:02:38.617752  283722 ubuntu.go:206] setting minikube options for container-runtime
	I1225 19:02:38.617972  283722 config.go:182] Loaded profile config "embed-certs-684693": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1225 19:02:38.618089  283722 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-684693
	I1225 19:02:38.641021  283722 main.go:144] libmachine: Using SSH client type: native
	I1225 19:02:38.641292  283722 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e300] 0x850fa0 <nil>  [] 0s} 127.0.0.1 33083 <nil> <nil>}
	I1225 19:02:38.641320  283722 main.go:144] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1225 19:02:39.715305  283722 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1225 19:02:39.715333  283722 machine.go:97] duration metric: took 4.883166339s to provisionDockerMachine
	I1225 19:02:39.715350  283722 start.go:293] postStartSetup for "embed-certs-684693" (driver="docker")
	I1225 19:02:39.715364  283722 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1225 19:02:39.715441  283722 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1225 19:02:39.715501  283722 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-684693
	I1225 19:02:39.740065  283722 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33083 SSHKeyPath:/home/jenkins/minikube-integration/22301-5579/.minikube/machines/embed-certs-684693/id_rsa Username:docker}
	I1225 19:02:39.843885  283722 ssh_runner.go:195] Run: cat /etc/os-release
	I1225 19:02:39.848425  283722 main.go:144] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1225 19:02:39.848455  283722 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1225 19:02:39.848467  283722 filesync.go:126] Scanning /home/jenkins/minikube-integration/22301-5579/.minikube/addons for local assets ...
	I1225 19:02:39.848524  283722 filesync.go:126] Scanning /home/jenkins/minikube-integration/22301-5579/.minikube/files for local assets ...
	I1225 19:02:39.848635  283722 filesync.go:149] local asset: /home/jenkins/minikube-integration/22301-5579/.minikube/files/etc/ssl/certs/91122.pem -> 91122.pem in /etc/ssl/certs
	I1225 19:02:39.848783  283722 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1225 19:02:39.858728  283722 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22301-5579/.minikube/files/etc/ssl/certs/91122.pem --> /etc/ssl/certs/91122.pem (1708 bytes)
	I1225 19:02:39.881703  283722 start.go:296] duration metric: took 166.337995ms for postStartSetup
	I1225 19:02:39.881788  283722 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1225 19:02:39.881839  283722 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-684693
	I1225 19:02:39.905842  283722 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33083 SSHKeyPath:/home/jenkins/minikube-integration/22301-5579/.minikube/machines/embed-certs-684693/id_rsa Username:docker}
	I1225 19:02:40.010304  283722 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1225 19:02:40.015690  283722 fix.go:56] duration metric: took 5.493403102s for fixHost
	I1225 19:02:40.015722  283722 start.go:83] releasing machines lock for "embed-certs-684693", held for 5.493472094s
	I1225 19:02:40.015792  283722 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-684693
	I1225 19:02:40.038762  283722 ssh_runner.go:195] Run: cat /version.json
	I1225 19:02:40.038822  283722 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-684693
	I1225 19:02:40.038841  283722 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1225 19:02:40.038947  283722 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-684693
	I1225 19:02:40.062655  283722 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33083 SSHKeyPath:/home/jenkins/minikube-integration/22301-5579/.minikube/machines/embed-certs-684693/id_rsa Username:docker}
	I1225 19:02:40.065695  283722 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33083 SSHKeyPath:/home/jenkins/minikube-integration/22301-5579/.minikube/machines/embed-certs-684693/id_rsa Username:docker}
	I1225 19:02:40.157087  283722 ssh_runner.go:195] Run: systemctl --version
	I1225 19:02:40.229059  283722 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1225 19:02:40.277374  283722 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1225 19:02:40.282823  283722 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1225 19:02:40.282886  283722 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1225 19:02:40.291041  283722 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1225 19:02:40.291063  283722 start.go:496] detecting cgroup driver to use...
	I1225 19:02:40.291096  283722 detect.go:190] detected "systemd" cgroup driver on host os
	I1225 19:02:40.291153  283722 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1225 19:02:40.306232  283722 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1225 19:02:40.318913  283722 docker.go:218] disabling cri-docker service (if available) ...
	I1225 19:02:40.318974  283722 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1225 19:02:40.337150  283722 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1225 19:02:40.353726  283722 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1225 19:02:40.454327  283722 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1225 19:02:40.533845  283722 docker.go:234] disabling docker service ...
	I1225 19:02:40.533928  283722 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1225 19:02:40.548103  283722 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1225 19:02:40.560536  283722 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1225 19:02:40.640861  283722 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1225 19:02:40.723298  283722 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1225 19:02:40.735960  283722 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1225 19:02:40.750674  283722 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1225 19:02:40.750756  283722 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1225 19:02:40.759687  283722 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1225 19:02:40.759744  283722 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1225 19:02:40.768619  283722 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1225 19:02:40.776920  283722 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1225 19:02:40.785326  283722 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1225 19:02:40.793469  283722 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1225 19:02:40.802341  283722 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1225 19:02:40.810328  283722 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1225 19:02:40.819077  283722 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1225 19:02:40.826284  283722 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1225 19:02:40.833207  283722 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1225 19:02:40.910503  283722 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1225 19:02:41.525244  283722 start.go:553] Will wait 60s for socket path /var/run/crio/crio.sock
	I1225 19:02:41.525315  283722 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1225 19:02:41.529300  283722 start.go:574] Will wait 60s for crictl version
	I1225 19:02:41.529353  283722 ssh_runner.go:195] Run: which crictl
	I1225 19:02:41.533043  283722 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1225 19:02:41.557927  283722 start.go:590] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1225 19:02:41.558033  283722 ssh_runner.go:195] Run: crio --version
	I1225 19:02:41.586469  283722 ssh_runner.go:195] Run: crio --version
	I1225 19:02:41.615377  283722 out.go:179] * Preparing Kubernetes v1.34.3 on CRI-O 1.34.3 ...
	W1225 19:02:36.749283  276130 pod_ready.go:104] pod "coredns-5dd5756b68-chdzr" is not "Ready", error: <nil>
	W1225 19:02:39.249184  276130 pod_ready.go:104] pod "coredns-5dd5756b68-chdzr" is not "Ready", error: <nil>
	I1225 19:02:40.250399  276130 pod_ready.go:94] pod "coredns-5dd5756b68-chdzr" is "Ready"
	I1225 19:02:40.250430  276130 pod_ready.go:86] duration metric: took 38.007925686s for pod "coredns-5dd5756b68-chdzr" in "kube-system" namespace to be "Ready" or be gone ...
	I1225 19:02:40.254322  276130 pod_ready.go:83] waiting for pod "etcd-old-k8s-version-163446" in "kube-system" namespace to be "Ready" or be gone ...
	I1225 19:02:40.260487  276130 pod_ready.go:94] pod "etcd-old-k8s-version-163446" is "Ready"
	I1225 19:02:40.260516  276130 pod_ready.go:86] duration metric: took 6.166204ms for pod "etcd-old-k8s-version-163446" in "kube-system" namespace to be "Ready" or be gone ...
	I1225 19:02:40.264076  276130 pod_ready.go:83] waiting for pod "kube-apiserver-old-k8s-version-163446" in "kube-system" namespace to be "Ready" or be gone ...
	I1225 19:02:40.268500  276130 pod_ready.go:94] pod "kube-apiserver-old-k8s-version-163446" is "Ready"
	I1225 19:02:40.268521  276130 pod_ready.go:86] duration metric: took 4.418592ms for pod "kube-apiserver-old-k8s-version-163446" in "kube-system" namespace to be "Ready" or be gone ...
	I1225 19:02:40.271820  276130 pod_ready.go:83] waiting for pod "kube-controller-manager-old-k8s-version-163446" in "kube-system" namespace to be "Ready" or be gone ...
	I1225 19:02:40.445948  276130 pod_ready.go:94] pod "kube-controller-manager-old-k8s-version-163446" is "Ready"
	I1225 19:02:40.445979  276130 pod_ready.go:86] duration metric: took 174.135469ms for pod "kube-controller-manager-old-k8s-version-163446" in "kube-system" namespace to be "Ready" or be gone ...
	I1225 19:02:40.646252  276130 pod_ready.go:83] waiting for pod "kube-proxy-mxztf" in "kube-system" namespace to be "Ready" or be gone ...
	I1225 19:02:41.046220  276130 pod_ready.go:94] pod "kube-proxy-mxztf" is "Ready"
	I1225 19:02:41.046246  276130 pod_ready.go:86] duration metric: took 399.972902ms for pod "kube-proxy-mxztf" in "kube-system" namespace to be "Ready" or be gone ...
	I1225 19:02:41.246867  276130 pod_ready.go:83] waiting for pod "kube-scheduler-old-k8s-version-163446" in "kube-system" namespace to be "Ready" or be gone ...
	I1225 19:02:41.646838  276130 pod_ready.go:94] pod "kube-scheduler-old-k8s-version-163446" is "Ready"
	I1225 19:02:41.646872  276130 pod_ready.go:86] duration metric: took 399.980482ms for pod "kube-scheduler-old-k8s-version-163446" in "kube-system" namespace to be "Ready" or be gone ...
	I1225 19:02:41.646890  276130 pod_ready.go:40] duration metric: took 39.408421641s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1225 19:02:41.696334  276130 start.go:625] kubectl: 1.35.0, cluster: 1.28.0 (minor skew: 7)
	I1225 19:02:41.697651  276130 out.go:203] 
	W1225 19:02:41.698979  276130 out.go:285] ! /usr/local/bin/kubectl is version 1.35.0, which may have incompatibilities with Kubernetes 1.28.0.
	I1225 19:02:41.700133  276130 out.go:179]   - Want kubectl v1.28.0? Try 'minikube kubectl -- get pods -A'
	I1225 19:02:41.701308  276130 out.go:179] * Done! kubectl is now configured to use "old-k8s-version-163446" cluster and "default" namespace by default
	W1225 19:02:38.760252  281279 pod_ready.go:104] pod "coredns-7d764666f9-lqvms" is not "Ready", error: <nil>
	W1225 19:02:41.251779  281279 pod_ready.go:104] pod "coredns-7d764666f9-lqvms" is not "Ready", error: <nil>
	I1225 19:02:39.959075  260034 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": (10.580193996s)
	W1225 19:02:39.959128  260034 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	Get "https://localhost:8443/api/v1/nodes?limit=500": dial tcp [::1]:8443: connect: connection refused - error from a previous attempt: read tcp [::1]:50714->[::1]:8443: read: connection reset by peer
	 output: 
	** stderr ** 
	Get "https://localhost:8443/api/v1/nodes?limit=500": dial tcp [::1]:8443: connect: connection refused - error from a previous attempt: read tcp [::1]:50714->[::1]:8443: read: connection reset by peer
	
	** /stderr **
	I1225 19:02:39.959139  260034 logs.go:123] Gathering logs for etcd [b9e20d208a63ece66f4b7d503d5220ba92810f69d6963dee30a8cf70eeec5508] ...
	I1225 19:02:39.959177  260034 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b9e20d208a63ece66f4b7d503d5220ba92810f69d6963dee30a8cf70eeec5508"
	I1225 19:02:40.005485  260034 logs.go:123] Gathering logs for kube-controller-manager [192a48996072199570ad945ab3e2532fd7d3abac911bd3ba86671ed5f662855d] ...
	I1225 19:02:40.005519  260034 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 192a48996072199570ad945ab3e2532fd7d3abac911bd3ba86671ed5f662855d"
	I1225 19:02:40.043514  260034 logs.go:123] Gathering logs for kube-controller-manager [33343dda71f9e4a85025812aa89a625b9ed4aacd8cf40e537a040b7a352c6338] ...
	I1225 19:02:40.043550  260034 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 33343dda71f9e4a85025812aa89a625b9ed4aacd8cf40e537a040b7a352c6338"
	I1225 19:02:40.079879  260034 logs.go:123] Gathering logs for container status ...
	I1225 19:02:40.079928  260034 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1225 19:02:40.115370  260034 logs.go:123] Gathering logs for kubelet ...
	I1225 19:02:40.115405  260034 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1225 19:02:40.201145  260034 logs.go:123] Gathering logs for kube-apiserver [6286b997d7e536f57739dca40618206bd2111cd0ea9142bc6f4203ad7d126123] ...
	I1225 19:02:40.201185  260034 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 6286b997d7e536f57739dca40618206bd2111cd0ea9142bc6f4203ad7d126123"
	I1225 19:02:40.235170  260034 logs.go:123] Gathering logs for kube-apiserver [44bf55c88d993f166c60ad4a0fedbaf734b561325d62b7a89b7297226b36cf23] ...
	I1225 19:02:40.235198  260034 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 44bf55c88d993f166c60ad4a0fedbaf734b561325d62b7a89b7297226b36cf23"
	W1225 19:02:40.266051  260034 logs.go:130] failed kube-apiserver [44bf55c88d993f166c60ad4a0fedbaf734b561325d62b7a89b7297226b36cf23]: command: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 44bf55c88d993f166c60ad4a0fedbaf734b561325d62b7a89b7297226b36cf23" /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 44bf55c88d993f166c60ad4a0fedbaf734b561325d62b7a89b7297226b36cf23": Process exited with status 1
	stdout:
	
	stderr:
	E1225 19:02:40.263563    1990 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"44bf55c88d993f166c60ad4a0fedbaf734b561325d62b7a89b7297226b36cf23\": container with ID starting with 44bf55c88d993f166c60ad4a0fedbaf734b561325d62b7a89b7297226b36cf23 not found: ID does not exist" containerID="44bf55c88d993f166c60ad4a0fedbaf734b561325d62b7a89b7297226b36cf23"
	time="2025-12-25T19:02:40Z" level=fatal msg="rpc error: code = NotFound desc = could not find container \"44bf55c88d993f166c60ad4a0fedbaf734b561325d62b7a89b7297226b36cf23\": container with ID starting with 44bf55c88d993f166c60ad4a0fedbaf734b561325d62b7a89b7297226b36cf23 not found: ID does not exist"
	 output: 
	** stderr ** 
	E1225 19:02:40.263563    1990 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"44bf55c88d993f166c60ad4a0fedbaf734b561325d62b7a89b7297226b36cf23\": container with ID starting with 44bf55c88d993f166c60ad4a0fedbaf734b561325d62b7a89b7297226b36cf23 not found: ID does not exist" containerID="44bf55c88d993f166c60ad4a0fedbaf734b561325d62b7a89b7297226b36cf23"
	time="2025-12-25T19:02:40Z" level=fatal msg="rpc error: code = NotFound desc = could not find container \"44bf55c88d993f166c60ad4a0fedbaf734b561325d62b7a89b7297226b36cf23\": container with ID starting with 44bf55c88d993f166c60ad4a0fedbaf734b561325d62b7a89b7297226b36cf23 not found: ID does not exist"
	
	** /stderr **
	I1225 19:02:40.266070  260034 logs.go:123] Gathering logs for kube-apiserver [e3d5802d1e3d9285d4b0925e9f5391b90305352a65abbab3d6461e977e07719c] ...
	I1225 19:02:40.266081  260034 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e3d5802d1e3d9285d4b0925e9f5391b90305352a65abbab3d6461e977e07719c"
	I1225 19:02:40.314117  260034 logs.go:123] Gathering logs for kube-scheduler [5908cdc560e3221d929c96c779b2e73ece21f96acb63b3adbf448cc7de791f2b] ...
	I1225 19:02:40.314147  260034 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5908cdc560e3221d929c96c779b2e73ece21f96acb63b3adbf448cc7de791f2b"
	I1225 19:02:41.616763  283722 cli_runner.go:164] Run: docker network inspect embed-certs-684693 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1225 19:02:41.634843  283722 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1225 19:02:41.638869  283722 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1225 19:02:41.650549  283722 kubeadm.go:884] updating cluster {Name:embed-certs-684693 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:embed-certs-684693 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false} ...
	I1225 19:02:41.650690  283722 preload.go:188] Checking if preload exists for k8s version v1.34.3 and runtime crio
	I1225 19:02:41.650753  283722 ssh_runner.go:195] Run: sudo crictl images --output json
	I1225 19:02:41.688485  283722 crio.go:561] all images are preloaded for cri-o runtime.
	I1225 19:02:41.688505  283722 crio.go:433] Images already preloaded, skipping extraction
	I1225 19:02:41.688547  283722 ssh_runner.go:195] Run: sudo crictl images --output json
	I1225 19:02:41.714798  283722 crio.go:561] all images are preloaded for cri-o runtime.
	I1225 19:02:41.714820  283722 cache_images.go:86] Images are preloaded, skipping loading
	I1225 19:02:41.714835  283722 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.34.3 crio true true} ...
	I1225 19:02:41.714964  283722 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=embed-certs-684693 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.3 ClusterName:embed-certs-684693 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1225 19:02:41.715039  283722 ssh_runner.go:195] Run: crio config
	I1225 19:02:41.768162  283722 cni.go:84] Creating CNI manager for ""
	I1225 19:02:41.768186  283722 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1225 19:02:41.768201  283722 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1225 19:02:41.768275  283722 kubeadm.go:197] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.34.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-684693 NodeName:embed-certs-684693 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1225 19:02:41.768465  283722 kubeadm.go:203] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-684693"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1225 19:02:41.768549  283722 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.3
	I1225 19:02:41.777579  283722 binaries.go:51] Found k8s binaries, skipping transfer
	I1225 19:02:41.777639  283722 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1225 19:02:41.786430  283722 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (368 bytes)
	I1225 19:02:41.799614  283722 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1225 19:02:41.813093  283722 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2214 bytes)
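
The rendered kubeadm config above is a four-document YAML stream (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) that was just copied to /var/tmp/minikube/kubeadm.yaml.new (2214 bytes). A small sketch, assuming gopkg.in/yaml.v3 and a local copy of the file, that decodes each document and prints its apiVersion/kind; it only illustrates checking that the stream parses, not how minikube itself validates it:

    package main

    import (
        "fmt"
        "io"
        "os"

        "gopkg.in/yaml.v3"
    )

    // listKinds decodes every document in a multi-document YAML stream and
    // reports its apiVersion/kind, enough to spot a truncated or malformed render.
    func listKinds(path string) error {
        f, err := os.Open(path)
        if err != nil {
            return err
        }
        defer f.Close()

        dec := yaml.NewDecoder(f)
        for i := 1; ; i++ {
            var doc struct {
                APIVersion string `yaml:"apiVersion"`
                Kind       string `yaml:"kind"`
            }
            if err := dec.Decode(&doc); err == io.EOF {
                return nil
            } else if err != nil {
                return fmt.Errorf("document %d: %w", i, err)
            }
            fmt.Printf("doc %d: %s %s\n", i, doc.APIVersion, doc.Kind)
        }
    }

    func main() {
        // Hypothetical local copy of the rendered config.
        if err := listKinds("kubeadm.yaml.new"); err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
    }
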
	I1225 19:02:41.827054  283722 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1225 19:02:41.830776  283722 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1225 19:02:41.841065  283722 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1225 19:02:41.923904  283722 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1225 19:02:41.951478  283722 certs.go:69] Setting up /home/jenkins/minikube-integration/22301-5579/.minikube/profiles/embed-certs-684693 for IP: 192.168.76.2
	I1225 19:02:41.951499  283722 certs.go:195] generating shared ca certs ...
	I1225 19:02:41.951517  283722 certs.go:227] acquiring lock for ca certs: {Name:mkc96ab6366f062029d385d20297063671b19bca Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1225 19:02:41.951691  283722 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22301-5579/.minikube/ca.key
	I1225 19:02:41.951758  283722 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22301-5579/.minikube/proxy-client-ca.key
	I1225 19:02:41.951770  283722 certs.go:257] generating profile certs ...
	I1225 19:02:41.951883  283722 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22301-5579/.minikube/profiles/embed-certs-684693/client.key
	I1225 19:02:41.951982  283722 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22301-5579/.minikube/profiles/embed-certs-684693/apiserver.key.7d2dd373
	I1225 19:02:41.952032  283722 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22301-5579/.minikube/profiles/embed-certs-684693/proxy-client.key
	I1225 19:02:41.952168  283722 certs.go:484] found cert: /home/jenkins/minikube-integration/22301-5579/.minikube/certs/9112.pem (1338 bytes)
	W1225 19:02:41.952213  283722 certs.go:480] ignoring /home/jenkins/minikube-integration/22301-5579/.minikube/certs/9112_empty.pem, impossibly tiny 0 bytes
	I1225 19:02:41.952225  283722 certs.go:484] found cert: /home/jenkins/minikube-integration/22301-5579/.minikube/certs/ca-key.pem (1679 bytes)
	I1225 19:02:41.952259  283722 certs.go:484] found cert: /home/jenkins/minikube-integration/22301-5579/.minikube/certs/ca.pem (1078 bytes)
	I1225 19:02:41.952296  283722 certs.go:484] found cert: /home/jenkins/minikube-integration/22301-5579/.minikube/certs/cert.pem (1123 bytes)
	I1225 19:02:41.952329  283722 certs.go:484] found cert: /home/jenkins/minikube-integration/22301-5579/.minikube/certs/key.pem (1679 bytes)
	I1225 19:02:41.952390  283722 certs.go:484] found cert: /home/jenkins/minikube-integration/22301-5579/.minikube/files/etc/ssl/certs/91122.pem (1708 bytes)
	I1225 19:02:41.954169  283722 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22301-5579/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1225 19:02:41.976087  283722 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22301-5579/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1225 19:02:42.004209  283722 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22301-5579/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1225 19:02:42.026214  283722 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22301-5579/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1225 19:02:42.051456  283722 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22301-5579/.minikube/profiles/embed-certs-684693/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I1225 19:02:42.070827  283722 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22301-5579/.minikube/profiles/embed-certs-684693/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1225 19:02:42.090229  283722 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22301-5579/.minikube/profiles/embed-certs-684693/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1225 19:02:42.107431  283722 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22301-5579/.minikube/profiles/embed-certs-684693/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1225 19:02:42.124629  283722 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22301-5579/.minikube/files/etc/ssl/certs/91122.pem --> /usr/share/ca-certificates/91122.pem (1708 bytes)
	I1225 19:02:42.142144  283722 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22301-5579/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1225 19:02:42.159938  283722 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22301-5579/.minikube/certs/9112.pem --> /usr/share/ca-certificates/9112.pem (1338 bytes)
	I1225 19:02:42.177820  283722 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (722 bytes)
	I1225 19:02:42.190192  283722 ssh_runner.go:195] Run: openssl version
	I1225 19:02:42.196349  283722 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/91122.pem
	I1225 19:02:42.204337  283722 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/91122.pem /etc/ssl/certs/91122.pem
	I1225 19:02:42.211812  283722 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/91122.pem
	I1225 19:02:42.215821  283722 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 25 18:34 /usr/share/ca-certificates/91122.pem
	I1225 19:02:42.215879  283722 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/91122.pem
	I1225 19:02:42.252652  283722 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1225 19:02:42.260755  283722 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1225 19:02:42.268248  283722 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1225 19:02:42.275834  283722 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1225 19:02:42.279512  283722 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 25 18:28 /usr/share/ca-certificates/minikubeCA.pem
	I1225 19:02:42.279566  283722 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1225 19:02:42.314793  283722 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1225 19:02:42.322677  283722 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/9112.pem
	I1225 19:02:42.330378  283722 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/9112.pem /etc/ssl/certs/9112.pem
	I1225 19:02:42.338291  283722 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/9112.pem
	I1225 19:02:42.342039  283722 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 25 18:34 /usr/share/ca-certificates/9112.pem
	I1225 19:02:42.342086  283722 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/9112.pem
	I1225 19:02:42.376088  283722 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1225 19:02:42.383483  283722 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1225 19:02:42.387192  283722 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1225 19:02:42.421140  283722 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1225 19:02:42.455674  283722 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1225 19:02:42.504470  283722 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1225 19:02:42.549706  283722 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1225 19:02:42.598462  283722 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
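
The openssl x509 ... -checkend 86400 runs above ask whether each control-plane certificate remains valid for at least another 24 hours. A rough Go equivalent using crypto/x509; the file name in main is a placeholder, since the logged checks read the certs under /var/lib/minikube/certs on the node:

    package main

    import (
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "os"
        "time"
    )

    // expiresWithin reports whether the first certificate in a PEM file expires
    // within the given window, mirroring `openssl x509 -checkend 86400`.
    func expiresWithin(path string, window time.Duration) (bool, error) {
        data, err := os.ReadFile(path)
        if err != nil {
            return false, err
        }
        block, _ := pem.Decode(data)
        if block == nil {
            return false, fmt.Errorf("%s: no PEM block found", path)
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            return false, err
        }
        return time.Now().Add(window).After(cert.NotAfter), nil
    }

    func main() {
        // Placeholder path; the log checks apiserver, etcd and front-proxy certs.
        soon, err := expiresWithin("apiserver.crt", 24*time.Hour)
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
        fmt.Println("expires within 24h:", soon)
    }
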
	I1225 19:02:42.645140  283722 kubeadm.go:401] StartCluster: {Name:embed-certs-684693 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:embed-certs-684693 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1225 19:02:42.645223  283722 cri.go:61] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1225 19:02:42.645296  283722 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1225 19:02:42.677285  283722 cri.go:96] found id: "8d7e8dc3eb792d198de0248572b5e18d4499c1684bda9bf5f17def41a2fab818"
	I1225 19:02:42.677315  283722 cri.go:96] found id: "8d2b7baedf500ee7f1bfe8f8dd198f5e17d7d4765eb8784fa1263ff20a37911d"
	I1225 19:02:42.677322  283722 cri.go:96] found id: "f163abb6ccc23812b01aab1787a1e9cb17c7aa29ac0031c5d3d528bd0d223238"
	I1225 19:02:42.677327  283722 cri.go:96] found id: "96d9542c197212f0c05bc896dbb04b02a41cb77ea63e21dd98bd9fec4091843d"
	I1225 19:02:42.677331  283722 cri.go:96] found id: ""
	I1225 19:02:42.677390  283722 ssh_runner.go:195] Run: sudo runc list -f json
	W1225 19:02:42.691054  283722 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-25T19:02:42Z" level=error msg="open /run/runc: no such file or directory"
	I1225 19:02:42.691135  283722 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1225 19:02:42.699848  283722 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1225 19:02:42.699868  283722 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1225 19:02:42.699928  283722 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1225 19:02:42.707825  283722 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1225 19:02:42.708605  283722 kubeconfig.go:47] verify endpoint returned: get endpoint: "embed-certs-684693" does not appear in /home/jenkins/minikube-integration/22301-5579/kubeconfig
	I1225 19:02:42.709197  283722 kubeconfig.go:62] /home/jenkins/minikube-integration/22301-5579/kubeconfig needs updating (will repair): [kubeconfig missing "embed-certs-684693" cluster setting kubeconfig missing "embed-certs-684693" context setting]
	I1225 19:02:42.709842  283722 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22301-5579/kubeconfig: {Name:mk959de02482281f87c2171d9b2421941fad1e06 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1225 19:02:42.711591  283722 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1225 19:02:42.719379  283722 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.76.2
	I1225 19:02:42.719403  283722 kubeadm.go:602] duration metric: took 19.530409ms to restartPrimaryControlPlane
	I1225 19:02:42.719411  283722 kubeadm.go:403] duration metric: took 74.282356ms to StartCluster
	I1225 19:02:42.719426  283722 settings.go:142] acquiring lock: {Name:mk8db67a95daebdad9164c803819dcb179c3006a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1225 19:02:42.719492  283722 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22301-5579/kubeconfig
	I1225 19:02:42.721450  283722 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22301-5579/kubeconfig: {Name:mk959de02482281f87c2171d9b2421941fad1e06 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1225 19:02:42.721725  283722 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1225 19:02:42.721786  283722 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1225 19:02:42.721909  283722 addons.go:70] Setting storage-provisioner=true in profile "embed-certs-684693"
	I1225 19:02:42.721939  283722 addons.go:239] Setting addon storage-provisioner=true in "embed-certs-684693"
	W1225 19:02:42.721950  283722 addons.go:248] addon storage-provisioner should already be in state true
	I1225 19:02:42.721980  283722 host.go:66] Checking if "embed-certs-684693" exists ...
	I1225 19:02:42.722007  283722 config.go:182] Loaded profile config "embed-certs-684693": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1225 19:02:42.722066  283722 addons.go:70] Setting dashboard=true in profile "embed-certs-684693"
	I1225 19:02:42.722083  283722 addons.go:239] Setting addon dashboard=true in "embed-certs-684693"
	W1225 19:02:42.722090  283722 addons.go:248] addon dashboard should already be in state true
	I1225 19:02:42.722116  283722 host.go:66] Checking if "embed-certs-684693" exists ...
	I1225 19:02:42.722510  283722 cli_runner.go:164] Run: docker container inspect embed-certs-684693 --format={{.State.Status}}
	I1225 19:02:42.722579  283722 cli_runner.go:164] Run: docker container inspect embed-certs-684693 --format={{.State.Status}}
	I1225 19:02:42.722672  283722 addons.go:70] Setting default-storageclass=true in profile "embed-certs-684693"
	I1225 19:02:42.722694  283722 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-684693"
	I1225 19:02:42.722988  283722 cli_runner.go:164] Run: docker container inspect embed-certs-684693 --format={{.State.Status}}
	I1225 19:02:42.723522  283722 out.go:179] * Verifying Kubernetes components...
	I1225 19:02:42.724623  283722 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1225 19:02:42.748101  283722 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1225 19:02:42.748216  283722 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1225 19:02:42.749489  283722 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1225 19:02:42.749509  283722 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1225 19:02:42.749606  283722 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-684693
	I1225 19:02:42.750777  283722 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1225 19:02:42.750946  283722 addons.go:239] Setting addon default-storageclass=true in "embed-certs-684693"
	W1225 19:02:42.750969  283722 addons.go:248] addon default-storageclass should already be in state true
	I1225 19:02:42.750996  283722 host.go:66] Checking if "embed-certs-684693" exists ...
	I1225 19:02:42.751539  283722 cli_runner.go:164] Run: docker container inspect embed-certs-684693 --format={{.State.Status}}
	I1225 19:02:42.752863  283722 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1225 19:02:42.752881  283722 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1225 19:02:42.752966  283722 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-684693
	I1225 19:02:42.787641  283722 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33083 SSHKeyPath:/home/jenkins/minikube-integration/22301-5579/.minikube/machines/embed-certs-684693/id_rsa Username:docker}
	I1225 19:02:42.787671  283722 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1225 19:02:42.787787  283722 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1225 19:02:42.787859  283722 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-684693
	I1225 19:02:42.790135  283722 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33083 SSHKeyPath:/home/jenkins/minikube-integration/22301-5579/.minikube/machines/embed-certs-684693/id_rsa Username:docker}
	I1225 19:02:42.812493  283722 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33083 SSHKeyPath:/home/jenkins/minikube-integration/22301-5579/.minikube/machines/embed-certs-684693/id_rsa Username:docker}
	I1225 19:02:42.890653  283722 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1225 19:02:42.900287  283722 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1225 19:02:42.903183  283722 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1225 19:02:42.903204  283722 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1225 19:02:42.911978  283722 node_ready.go:35] waiting up to 6m0s for node "embed-certs-684693" to be "Ready" ...
	I1225 19:02:42.920472  283722 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1225 19:02:42.920498  283722 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1225 19:02:42.923373  283722 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1225 19:02:42.943729  283722 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1225 19:02:42.943755  283722 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1225 19:02:42.963551  283722 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1225 19:02:42.963576  283722 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1225 19:02:42.982558  283722 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1225 19:02:42.982575  283722 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1225 19:02:42.999301  283722 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1225 19:02:42.999373  283722 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1225 19:02:43.016589  283722 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1225 19:02:43.016615  283722 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1225 19:02:43.033331  283722 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1225 19:02:43.033357  283722 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1225 19:02:43.049617  283722 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1225 19:02:43.049640  283722 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1225 19:02:43.063573  283722 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1225 19:02:44.592329  283722 node_ready.go:49] node "embed-certs-684693" is "Ready"
	I1225 19:02:44.592368  283722 node_ready.go:38] duration metric: took 1.680338472s for node "embed-certs-684693" to be "Ready" ...
	I1225 19:02:44.592387  283722 api_server.go:52] waiting for apiserver process to appear ...
	I1225 19:02:44.592444  283722 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1225 19:02:45.119446  283722 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.219122023s)
	I1225 19:02:45.119487  283722 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.196089746s)
	I1225 19:02:45.119669  283722 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (2.056061114s)
	I1225 19:02:45.119726  283722 api_server.go:72] duration metric: took 2.397967807s to wait for apiserver process to appear ...
	I1225 19:02:45.119772  283722 api_server.go:88] waiting for apiserver healthz status ...
	I1225 19:02:45.119794  283722 api_server.go:299] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1225 19:02:45.121085  283722 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p embed-certs-684693 addons enable metrics-server
	
	I1225 19:02:45.126537  283722 api_server.go:325] https://192.168.76.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1225 19:02:45.126577  283722 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
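
A 500 from /healthz is expected at this point: the two [-] poststarthook entries show that the RBAC bootstrap roles and default priority classes are still being created, and the endpoint flips to 200 about a second later in this run. A stripped-down sketch of such a polling loop; certificate verification is skipped only to keep the example self-contained, whereas the real client authenticates with the cluster's certificates:

    package main

    import (
        "crypto/tls"
        "fmt"
        "net/http"
        "time"
    )

    // waitHealthy polls an apiserver /healthz endpoint until it answers 200 OK
    // or the deadline passes. InsecureSkipVerify keeps the sketch self-contained;
    // a production client would present the cluster CA and a client certificate.
    func waitHealthy(url string, timeout time.Duration) error {
        client := &http.Client{
            Timeout:   2 * time.Second,
            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
        }
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            resp, err := client.Get(url)
            if err == nil {
                resp.Body.Close()
                if resp.StatusCode == http.StatusOK {
                    return nil
                }
                fmt.Printf("healthz returned %d, retrying\n", resp.StatusCode)
            }
            time.Sleep(500 * time.Millisecond)
        }
        return fmt.Errorf("apiserver did not become healthy within %s", timeout)
    }

    func main() {
        if err := waitHealthy("https://192.168.76.2:8443/healthz", time.Minute); err != nil {
            fmt.Println(err)
        }
    }
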
	I1225 19:02:45.133051  283722 out.go:179] * Enabled addons: storage-provisioner, dashboard, default-storageclass
	W1225 19:02:43.252798  281279 pod_ready.go:104] pod "coredns-7d764666f9-lqvms" is not "Ready", error: <nil>
	W1225 19:02:45.752853  281279 pod_ready.go:104] pod "coredns-7d764666f9-lqvms" is not "Ready", error: <nil>
	I1225 19:02:42.845373  260034 api_server.go:299] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1225 19:02:42.845820  260034 api_server.go:315] stopped: https://192.168.94.2:8443/healthz: Get "https://192.168.94.2:8443/healthz": dial tcp 192.168.94.2:8443: connect: connection refused
	I1225 19:02:42.845875  260034 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1225 19:02:42.845996  260034 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1225 19:02:42.879236  260034 cri.go:96] found id: "6286b997d7e536f57739dca40618206bd2111cd0ea9142bc6f4203ad7d126123"
	I1225 19:02:42.879257  260034 cri.go:96] found id: "e3d5802d1e3d9285d4b0925e9f5391b90305352a65abbab3d6461e977e07719c"
	I1225 19:02:42.879264  260034 cri.go:96] found id: ""
	I1225 19:02:42.879271  260034 logs.go:282] 2 containers: [6286b997d7e536f57739dca40618206bd2111cd0ea9142bc6f4203ad7d126123 e3d5802d1e3d9285d4b0925e9f5391b90305352a65abbab3d6461e977e07719c]
	I1225 19:02:42.879320  260034 ssh_runner.go:195] Run: which crictl
	I1225 19:02:42.884110  260034 ssh_runner.go:195] Run: which crictl
	I1225 19:02:42.888950  260034 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1225 19:02:42.889027  260034 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1225 19:02:42.932017  260034 cri.go:96] found id: "b9e20d208a63ece66f4b7d503d5220ba92810f69d6963dee30a8cf70eeec5508"
	I1225 19:02:42.932040  260034 cri.go:96] found id: ""
	I1225 19:02:42.932057  260034 logs.go:282] 1 containers: [b9e20d208a63ece66f4b7d503d5220ba92810f69d6963dee30a8cf70eeec5508]
	I1225 19:02:42.932110  260034 ssh_runner.go:195] Run: which crictl
	I1225 19:02:42.937106  260034 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1225 19:02:42.937170  260034 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1225 19:02:42.973822  260034 cri.go:96] found id: ""
	I1225 19:02:42.973849  260034 logs.go:282] 0 containers: []
	W1225 19:02:42.973859  260034 logs.go:284] No container was found matching "coredns"
	I1225 19:02:42.973866  260034 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1225 19:02:42.973935  260034 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1225 19:02:43.007466  260034 cri.go:96] found id: "5908cdc560e3221d929c96c779b2e73ece21f96acb63b3adbf448cc7de791f2b"
	I1225 19:02:43.007489  260034 cri.go:96] found id: "47a1daa4bde269c4add70fd957319c2bd011e78566a1187d9a76b78fb4005782"
	I1225 19:02:43.007601  260034 cri.go:96] found id: ""
	I1225 19:02:43.007630  260034 logs.go:282] 2 containers: [5908cdc560e3221d929c96c779b2e73ece21f96acb63b3adbf448cc7de791f2b 47a1daa4bde269c4add70fd957319c2bd011e78566a1187d9a76b78fb4005782]
	I1225 19:02:43.007693  260034 ssh_runner.go:195] Run: which crictl
	I1225 19:02:43.012837  260034 ssh_runner.go:195] Run: which crictl
	I1225 19:02:43.018073  260034 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1225 19:02:43.018146  260034 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1225 19:02:43.051689  260034 cri.go:96] found id: ""
	I1225 19:02:43.051713  260034 logs.go:282] 0 containers: []
	W1225 19:02:43.051723  260034 logs.go:284] No container was found matching "kube-proxy"
	I1225 19:02:43.051738  260034 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1225 19:02:43.051843  260034 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1225 19:02:43.085693  260034 cri.go:96] found id: "192a48996072199570ad945ab3e2532fd7d3abac911bd3ba86671ed5f662855d"
	I1225 19:02:43.085732  260034 cri.go:96] found id: "33343dda71f9e4a85025812aa89a625b9ed4aacd8cf40e537a040b7a352c6338"
	I1225 19:02:43.085738  260034 cri.go:96] found id: ""
	I1225 19:02:43.085747  260034 logs.go:282] 2 containers: [192a48996072199570ad945ab3e2532fd7d3abac911bd3ba86671ed5f662855d 33343dda71f9e4a85025812aa89a625b9ed4aacd8cf40e537a040b7a352c6338]
	I1225 19:02:43.085920  260034 ssh_runner.go:195] Run: which crictl
	I1225 19:02:43.090459  260034 ssh_runner.go:195] Run: which crictl
	I1225 19:02:43.094950  260034 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1225 19:02:43.095018  260034 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1225 19:02:43.129401  260034 cri.go:96] found id: ""
	I1225 19:02:43.129427  260034 logs.go:282] 0 containers: []
	W1225 19:02:43.129435  260034 logs.go:284] No container was found matching "kindnet"
	I1225 19:02:43.129493  260034 cri.go:61] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1225 19:02:43.129555  260034 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=storage-provisioner
	I1225 19:02:43.160140  260034 cri.go:96] found id: ""
	I1225 19:02:43.160168  260034 logs.go:282] 0 containers: []
	W1225 19:02:43.160181  260034 logs.go:284] No container was found matching "storage-provisioner"
	I1225 19:02:43.160208  260034 logs.go:123] Gathering logs for describe nodes ...
	I1225 19:02:43.160227  260034 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1225 19:02:43.217277  260034 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1225 19:02:43.217294  260034 logs.go:123] Gathering logs for kube-apiserver [e3d5802d1e3d9285d4b0925e9f5391b90305352a65abbab3d6461e977e07719c] ...
	I1225 19:02:43.217305  260034 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e3d5802d1e3d9285d4b0925e9f5391b90305352a65abbab3d6461e977e07719c"
	I1225 19:02:43.253875  260034 logs.go:123] Gathering logs for etcd [b9e20d208a63ece66f4b7d503d5220ba92810f69d6963dee30a8cf70eeec5508] ...
	I1225 19:02:43.253919  260034 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b9e20d208a63ece66f4b7d503d5220ba92810f69d6963dee30a8cf70eeec5508"
	I1225 19:02:43.289320  260034 logs.go:123] Gathering logs for CRI-O ...
	I1225 19:02:43.289347  260034 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1225 19:02:43.360025  260034 logs.go:123] Gathering logs for container status ...
	I1225 19:02:43.360064  260034 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1225 19:02:43.391698  260034 logs.go:123] Gathering logs for kube-apiserver [6286b997d7e536f57739dca40618206bd2111cd0ea9142bc6f4203ad7d126123] ...
	I1225 19:02:43.391740  260034 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 6286b997d7e536f57739dca40618206bd2111cd0ea9142bc6f4203ad7d126123"
	I1225 19:02:43.425941  260034 logs.go:123] Gathering logs for kube-scheduler [5908cdc560e3221d929c96c779b2e73ece21f96acb63b3adbf448cc7de791f2b] ...
	I1225 19:02:43.425974  260034 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5908cdc560e3221d929c96c779b2e73ece21f96acb63b3adbf448cc7de791f2b"
	I1225 19:02:43.451809  260034 logs.go:123] Gathering logs for kube-scheduler [47a1daa4bde269c4add70fd957319c2bd011e78566a1187d9a76b78fb4005782] ...
	I1225 19:02:43.451836  260034 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 47a1daa4bde269c4add70fd957319c2bd011e78566a1187d9a76b78fb4005782"
	I1225 19:02:43.480244  260034 logs.go:123] Gathering logs for kube-controller-manager [192a48996072199570ad945ab3e2532fd7d3abac911bd3ba86671ed5f662855d] ...
	I1225 19:02:43.480269  260034 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 192a48996072199570ad945ab3e2532fd7d3abac911bd3ba86671ed5f662855d"
	I1225 19:02:43.508518  260034 logs.go:123] Gathering logs for kube-controller-manager [33343dda71f9e4a85025812aa89a625b9ed4aacd8cf40e537a040b7a352c6338] ...
	I1225 19:02:43.508546  260034 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 33343dda71f9e4a85025812aa89a625b9ed4aacd8cf40e537a040b7a352c6338"
	I1225 19:02:43.537208  260034 logs.go:123] Gathering logs for kubelet ...
	I1225 19:02:43.537242  260034 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1225 19:02:43.623054  260034 logs.go:123] Gathering logs for dmesg ...
	I1225 19:02:43.623094  260034 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
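
Each "Gathering logs for ... [id]" pair above tails the last 400 lines of one container's logs via crictl. A bare-bones version of that loop, shelling out to crictl just as the log does; the container IDs in main are copied from this run purely as sample input, and crictl is assumed to be on PATH:

    package main

    import (
        "fmt"
        "os/exec"
    )

    // tailContainerLogs mirrors the repeated
    // `sudo crictl logs --tail 400 <id>` invocations shown in the log above.
    func tailContainerLogs(ids map[string]string, tail int) {
        for name, id := range ids {
            out, err := exec.Command("sudo", "crictl", "logs", "--tail", fmt.Sprint(tail), id).CombinedOutput()
            if err != nil {
                fmt.Printf("== %s (%s): %v\n", name, id, err)
                continue
            }
            fmt.Printf("== %s (%.12s) ==\n%s\n", name, id, out)
        }
    }

    func main() {
        // IDs taken from the containers found earlier in this run, as example input only.
        tailContainerLogs(map[string]string{
            "kube-apiserver": "6286b997d7e536f57739dca40618206bd2111cd0ea9142bc6f4203ad7d126123",
            "etcd":           "b9e20d208a63ece66f4b7d503d5220ba92810f69d6963dee30a8cf70eeec5508",
        }, 400)
    }
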
	I1225 19:02:46.140967  260034 api_server.go:299] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1225 19:02:46.141334  260034 api_server.go:315] stopped: https://192.168.94.2:8443/healthz: Get "https://192.168.94.2:8443/healthz": dial tcp 192.168.94.2:8443: connect: connection refused
	I1225 19:02:46.141390  260034 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1225 19:02:46.141462  260034 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1225 19:02:46.172790  260034 cri.go:96] found id: "6286b997d7e536f57739dca40618206bd2111cd0ea9142bc6f4203ad7d126123"
	I1225 19:02:46.172813  260034 cri.go:96] found id: "e3d5802d1e3d9285d4b0925e9f5391b90305352a65abbab3d6461e977e07719c"
	I1225 19:02:46.172819  260034 cri.go:96] found id: ""
	I1225 19:02:46.172828  260034 logs.go:282] 2 containers: [6286b997d7e536f57739dca40618206bd2111cd0ea9142bc6f4203ad7d126123 e3d5802d1e3d9285d4b0925e9f5391b90305352a65abbab3d6461e977e07719c]
	I1225 19:02:46.172889  260034 ssh_runner.go:195] Run: which crictl
	I1225 19:02:46.177222  260034 ssh_runner.go:195] Run: which crictl
	I1225 19:02:46.181021  260034 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1225 19:02:46.181083  260034 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1225 19:02:46.209343  260034 cri.go:96] found id: "b9e20d208a63ece66f4b7d503d5220ba92810f69d6963dee30a8cf70eeec5508"
	I1225 19:02:46.209371  260034 cri.go:96] found id: ""
	I1225 19:02:46.209380  260034 logs.go:282] 1 containers: [b9e20d208a63ece66f4b7d503d5220ba92810f69d6963dee30a8cf70eeec5508]
	I1225 19:02:46.209456  260034 ssh_runner.go:195] Run: which crictl
	I1225 19:02:46.213577  260034 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1225 19:02:46.213647  260034 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1225 19:02:46.242055  260034 cri.go:96] found id: ""
	I1225 19:02:46.242081  260034 logs.go:282] 0 containers: []
	W1225 19:02:46.242092  260034 logs.go:284] No container was found matching "coredns"
	I1225 19:02:46.242100  260034 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1225 19:02:46.242163  260034 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1225 19:02:46.271150  260034 cri.go:96] found id: "5908cdc560e3221d929c96c779b2e73ece21f96acb63b3adbf448cc7de791f2b"
	I1225 19:02:46.271182  260034 cri.go:96] found id: "47a1daa4bde269c4add70fd957319c2bd011e78566a1187d9a76b78fb4005782"
	I1225 19:02:46.271189  260034 cri.go:96] found id: ""
	I1225 19:02:46.271200  260034 logs.go:282] 2 containers: [5908cdc560e3221d929c96c779b2e73ece21f96acb63b3adbf448cc7de791f2b 47a1daa4bde269c4add70fd957319c2bd011e78566a1187d9a76b78fb4005782]
	I1225 19:02:46.271265  260034 ssh_runner.go:195] Run: which crictl
	I1225 19:02:46.275579  260034 ssh_runner.go:195] Run: which crictl
	I1225 19:02:46.279164  260034 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1225 19:02:46.279230  260034 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1225 19:02:46.317615  260034 cri.go:96] found id: ""
	I1225 19:02:46.317639  260034 logs.go:282] 0 containers: []
	W1225 19:02:46.317647  260034 logs.go:284] No container was found matching "kube-proxy"
	I1225 19:02:46.317655  260034 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1225 19:02:46.317726  260034 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1225 19:02:46.345511  260034 cri.go:96] found id: "192a48996072199570ad945ab3e2532fd7d3abac911bd3ba86671ed5f662855d"
	I1225 19:02:46.345532  260034 cri.go:96] found id: "33343dda71f9e4a85025812aa89a625b9ed4aacd8cf40e537a040b7a352c6338"
	I1225 19:02:46.345536  260034 cri.go:96] found id: ""
	I1225 19:02:46.345542  260034 logs.go:282] 2 containers: [192a48996072199570ad945ab3e2532fd7d3abac911bd3ba86671ed5f662855d 33343dda71f9e4a85025812aa89a625b9ed4aacd8cf40e537a040b7a352c6338]
	I1225 19:02:46.345596  260034 ssh_runner.go:195] Run: which crictl
	I1225 19:02:46.349615  260034 ssh_runner.go:195] Run: which crictl
	I1225 19:02:46.353289  260034 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1225 19:02:46.353345  260034 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1225 19:02:46.379347  260034 cri.go:96] found id: ""
	I1225 19:02:46.379378  260034 logs.go:282] 0 containers: []
	W1225 19:02:46.379390  260034 logs.go:284] No container was found matching "kindnet"
	I1225 19:02:46.379398  260034 cri.go:61] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1225 19:02:46.379456  260034 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=storage-provisioner
	I1225 19:02:46.406076  260034 cri.go:96] found id: ""
	I1225 19:02:46.406103  260034 logs.go:282] 0 containers: []
	W1225 19:02:46.406111  260034 logs.go:284] No container was found matching "storage-provisioner"
	I1225 19:02:46.406120  260034 logs.go:123] Gathering logs for dmesg ...
	I1225 19:02:46.406130  260034 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1225 19:02:46.419479  260034 logs.go:123] Gathering logs for describe nodes ...
	I1225 19:02:46.419518  260034 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1225 19:02:46.475425  260034 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1225 19:02:46.475443  260034 logs.go:123] Gathering logs for kube-scheduler [5908cdc560e3221d929c96c779b2e73ece21f96acb63b3adbf448cc7de791f2b] ...
	I1225 19:02:46.475453  260034 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5908cdc560e3221d929c96c779b2e73ece21f96acb63b3adbf448cc7de791f2b"
	I1225 19:02:46.501954  260034 logs.go:123] Gathering logs for kube-controller-manager [192a48996072199570ad945ab3e2532fd7d3abac911bd3ba86671ed5f662855d] ...
	I1225 19:02:46.501986  260034 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 192a48996072199570ad945ab3e2532fd7d3abac911bd3ba86671ed5f662855d"
	I1225 19:02:46.529233  260034 logs.go:123] Gathering logs for CRI-O ...
	I1225 19:02:46.529264  260034 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1225 19:02:46.578649  260034 logs.go:123] Gathering logs for container status ...
	I1225 19:02:46.578689  260034 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1225 19:02:46.615466  260034 logs.go:123] Gathering logs for kubelet ...
	I1225 19:02:46.615502  260034 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1225 19:02:46.701574  260034 logs.go:123] Gathering logs for kube-apiserver [6286b997d7e536f57739dca40618206bd2111cd0ea9142bc6f4203ad7d126123] ...
	I1225 19:02:46.701605  260034 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 6286b997d7e536f57739dca40618206bd2111cd0ea9142bc6f4203ad7d126123"
	I1225 19:02:46.735031  260034 logs.go:123] Gathering logs for kube-apiserver [e3d5802d1e3d9285d4b0925e9f5391b90305352a65abbab3d6461e977e07719c] ...
	I1225 19:02:46.735066  260034 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e3d5802d1e3d9285d4b0925e9f5391b90305352a65abbab3d6461e977e07719c"
	I1225 19:02:46.774453  260034 logs.go:123] Gathering logs for etcd [b9e20d208a63ece66f4b7d503d5220ba92810f69d6963dee30a8cf70eeec5508] ...
	I1225 19:02:46.774478  260034 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b9e20d208a63ece66f4b7d503d5220ba92810f69d6963dee30a8cf70eeec5508"
	I1225 19:02:46.806331  260034 logs.go:123] Gathering logs for kube-scheduler [47a1daa4bde269c4add70fd957319c2bd011e78566a1187d9a76b78fb4005782] ...
	I1225 19:02:46.806357  260034 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 47a1daa4bde269c4add70fd957319c2bd011e78566a1187d9a76b78fb4005782"
	I1225 19:02:46.834357  260034 logs.go:123] Gathering logs for kube-controller-manager [33343dda71f9e4a85025812aa89a625b9ed4aacd8cf40e537a040b7a352c6338] ...
	I1225 19:02:46.834383  260034 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 33343dda71f9e4a85025812aa89a625b9ed4aacd8cf40e537a040b7a352c6338"
	I1225 19:02:45.136321  283722 addons.go:530] duration metric: took 2.414540777s for enable addons: enabled=[storage-provisioner dashboard default-storageclass]
	I1225 19:02:45.619971  283722 api_server.go:299] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1225 19:02:45.624831  283722 api_server.go:325] https://192.168.76.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1225 19:02:45.624857  283722 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1225 19:02:46.119996  283722 api_server.go:299] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1225 19:02:46.124768  283722 api_server.go:325] https://192.168.76.2:8443/healthz returned 200:
	ok
	I1225 19:02:46.125737  283722 api_server.go:141] control plane version: v1.34.3
	I1225 19:02:46.125763  283722 api_server.go:131] duration metric: took 1.005983234s to wait for apiserver health ...
	I1225 19:02:46.125773  283722 system_pods.go:43] waiting for kube-system pods to appear ...
	I1225 19:02:46.128670  283722 system_pods.go:59] 8 kube-system pods found
	I1225 19:02:46.128705  283722 system_pods.go:61] "coredns-66bc5c9577-n4nqj" [e02de70e-234a-4cf0-93f8-aac03bcce8cc] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1225 19:02:46.128713  283722 system_pods.go:61] "etcd-embed-certs-684693" [3bb05555-eb05-40bb-9547-53154738add7] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1225 19:02:46.128724  283722 system_pods.go:61] "kindnet-gqdkf" [655254fd-be22-4f04-a504-963b8b3da9f2] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1225 19:02:46.128730  283722 system_pods.go:61] "kube-apiserver-embed-certs-684693" [9826fbbb-77d2-43da-ae25-4d8e82236b2f] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1225 19:02:46.128736  283722 system_pods.go:61] "kube-controller-manager-embed-certs-684693" [6bedc00f-bd25-44d1-b4c3-0ebb3d35314b] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1225 19:02:46.128745  283722 system_pods.go:61] "kube-proxy-wzb26" [28372ff8-2832-49c8-b4ca-883af4201def] Running
	I1225 19:02:46.128753  283722 system_pods.go:61] "kube-scheduler-embed-certs-684693" [8cd9903e-f2f3-4efb-b85b-71ae600ce907] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1225 19:02:46.128758  283722 system_pods.go:61] "storage-provisioner" [7ee71ac9-a69c-4669-b8f2-a60dc3dac91f] Running
	I1225 19:02:46.128767  283722 system_pods.go:74] duration metric: took 2.986964ms to wait for pod list to return data ...
	I1225 19:02:46.128775  283722 default_sa.go:34] waiting for default service account to be created ...
	I1225 19:02:46.130955  283722 default_sa.go:45] found service account: "default"
	I1225 19:02:46.130979  283722 default_sa.go:55] duration metric: took 2.197529ms for default service account to be created ...
	I1225 19:02:46.130986  283722 system_pods.go:116] waiting for k8s-apps to be running ...
	I1225 19:02:46.133301  283722 system_pods.go:86] 8 kube-system pods found
	I1225 19:02:46.133324  283722 system_pods.go:89] "coredns-66bc5c9577-n4nqj" [e02de70e-234a-4cf0-93f8-aac03bcce8cc] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1225 19:02:46.133332  283722 system_pods.go:89] "etcd-embed-certs-684693" [3bb05555-eb05-40bb-9547-53154738add7] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1225 19:02:46.133337  283722 system_pods.go:89] "kindnet-gqdkf" [655254fd-be22-4f04-a504-963b8b3da9f2] Running
	I1225 19:02:46.133347  283722 system_pods.go:89] "kube-apiserver-embed-certs-684693" [9826fbbb-77d2-43da-ae25-4d8e82236b2f] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1225 19:02:46.133361  283722 system_pods.go:89] "kube-controller-manager-embed-certs-684693" [6bedc00f-bd25-44d1-b4c3-0ebb3d35314b] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1225 19:02:46.133365  283722 system_pods.go:89] "kube-proxy-wzb26" [28372ff8-2832-49c8-b4ca-883af4201def] Running
	I1225 19:02:46.133370  283722 system_pods.go:89] "kube-scheduler-embed-certs-684693" [8cd9903e-f2f3-4efb-b85b-71ae600ce907] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1225 19:02:46.133373  283722 system_pods.go:89] "storage-provisioner" [7ee71ac9-a69c-4669-b8f2-a60dc3dac91f] Running
	I1225 19:02:46.133380  283722 system_pods.go:126] duration metric: took 2.389428ms to wait for k8s-apps to be running ...
	I1225 19:02:46.133386  283722 system_svc.go:44] waiting for kubelet service to be running ....
	I1225 19:02:46.133426  283722 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1225 19:02:46.147334  283722 system_svc.go:56] duration metric: took 13.940563ms WaitForService to wait for kubelet
	I1225 19:02:46.147364  283722 kubeadm.go:587] duration metric: took 3.425608177s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1225 19:02:46.147386  283722 node_conditions.go:102] verifying NodePressure condition ...
	I1225 19:02:46.150394  283722 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1225 19:02:46.150422  283722 node_conditions.go:123] node cpu capacity is 8
	I1225 19:02:46.150438  283722 node_conditions.go:105] duration metric: took 3.045786ms to run NodePressure ...
	I1225 19:02:46.150455  283722 start.go:242] waiting for startup goroutines ...
	I1225 19:02:46.150468  283722 start.go:247] waiting for cluster config update ...
	I1225 19:02:46.150484  283722 start.go:256] writing updated cluster config ...
	I1225 19:02:46.150769  283722 ssh_runner.go:195] Run: rm -f paused
	I1225 19:02:46.154707  283722 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1225 19:02:46.158471  283722 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-n4nqj" in "kube-system" namespace to be "Ready" or be gone ...
	W1225 19:02:48.166327  283722 pod_ready.go:104] pod "coredns-66bc5c9577-n4nqj" is not "Ready", error: <nil>
	W1225 19:02:48.251567  281279 pod_ready.go:104] pod "coredns-7d764666f9-lqvms" is not "Ready", error: <nil>
	W1225 19:02:50.253392  281279 pod_ready.go:104] pod "coredns-7d764666f9-lqvms" is not "Ready", error: <nil>
	I1225 19:02:49.361398  260034 api_server.go:299] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1225 19:02:49.361870  260034 api_server.go:315] stopped: https://192.168.94.2:8443/healthz: Get "https://192.168.94.2:8443/healthz": dial tcp 192.168.94.2:8443: connect: connection refused
	I1225 19:02:49.361956  260034 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1225 19:02:49.362018  260034 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1225 19:02:49.398444  260034 cri.go:96] found id: "6286b997d7e536f57739dca40618206bd2111cd0ea9142bc6f4203ad7d126123"
	I1225 19:02:49.398471  260034 cri.go:96] found id: "e3d5802d1e3d9285d4b0925e9f5391b90305352a65abbab3d6461e977e07719c"
	I1225 19:02:49.398477  260034 cri.go:96] found id: ""
	I1225 19:02:49.398487  260034 logs.go:282] 2 containers: [6286b997d7e536f57739dca40618206bd2111cd0ea9142bc6f4203ad7d126123 e3d5802d1e3d9285d4b0925e9f5391b90305352a65abbab3d6461e977e07719c]
	I1225 19:02:49.398560  260034 ssh_runner.go:195] Run: which crictl
	I1225 19:02:49.403776  260034 ssh_runner.go:195] Run: which crictl
	I1225 19:02:49.409053  260034 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1225 19:02:49.409117  260034 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1225 19:02:49.443463  260034 cri.go:96] found id: "b9e20d208a63ece66f4b7d503d5220ba92810f69d6963dee30a8cf70eeec5508"
	I1225 19:02:49.443492  260034 cri.go:96] found id: ""
	I1225 19:02:49.443502  260034 logs.go:282] 1 containers: [b9e20d208a63ece66f4b7d503d5220ba92810f69d6963dee30a8cf70eeec5508]
	I1225 19:02:49.443561  260034 ssh_runner.go:195] Run: which crictl
	I1225 19:02:49.448740  260034 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1225 19:02:49.448807  260034 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1225 19:02:49.483163  260034 cri.go:96] found id: ""
	I1225 19:02:49.483191  260034 logs.go:282] 0 containers: []
	W1225 19:02:49.483203  260034 logs.go:284] No container was found matching "coredns"
	I1225 19:02:49.483210  260034 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1225 19:02:49.483270  260034 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1225 19:02:49.523558  260034 cri.go:96] found id: "5908cdc560e3221d929c96c779b2e73ece21f96acb63b3adbf448cc7de791f2b"
	I1225 19:02:49.523583  260034 cri.go:96] found id: "47a1daa4bde269c4add70fd957319c2bd011e78566a1187d9a76b78fb4005782"
	I1225 19:02:49.523589  260034 cri.go:96] found id: ""
	I1225 19:02:49.523599  260034 logs.go:282] 2 containers: [5908cdc560e3221d929c96c779b2e73ece21f96acb63b3adbf448cc7de791f2b 47a1daa4bde269c4add70fd957319c2bd011e78566a1187d9a76b78fb4005782]
	I1225 19:02:49.523656  260034 ssh_runner.go:195] Run: which crictl
	I1225 19:02:49.529651  260034 ssh_runner.go:195] Run: which crictl
	I1225 19:02:49.534440  260034 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1225 19:02:49.534514  260034 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1225 19:02:49.569461  260034 cri.go:96] found id: ""
	I1225 19:02:49.569487  260034 logs.go:282] 0 containers: []
	W1225 19:02:49.569498  260034 logs.go:284] No container was found matching "kube-proxy"
	I1225 19:02:49.569505  260034 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1225 19:02:49.569582  260034 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1225 19:02:49.603791  260034 cri.go:96] found id: "4d705cb1d8d35bb521ed14e8e331d8b60738d540fae000680afb7e9d190d17db"
	I1225 19:02:49.603818  260034 cri.go:96] found id: "192a48996072199570ad945ab3e2532fd7d3abac911bd3ba86671ed5f662855d"
	I1225 19:02:49.603824  260034 cri.go:96] found id: "33343dda71f9e4a85025812aa89a625b9ed4aacd8cf40e537a040b7a352c6338"
	I1225 19:02:49.603833  260034 cri.go:96] found id: ""
	I1225 19:02:49.603842  260034 logs.go:282] 3 containers: [4d705cb1d8d35bb521ed14e8e331d8b60738d540fae000680afb7e9d190d17db 192a48996072199570ad945ab3e2532fd7d3abac911bd3ba86671ed5f662855d 33343dda71f9e4a85025812aa89a625b9ed4aacd8cf40e537a040b7a352c6338]
	I1225 19:02:49.603913  260034 ssh_runner.go:195] Run: which crictl
	I1225 19:02:49.608932  260034 ssh_runner.go:195] Run: which crictl
	I1225 19:02:49.613461  260034 ssh_runner.go:195] Run: which crictl
	I1225 19:02:49.618371  260034 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1225 19:02:49.618474  260034 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1225 19:02:49.652528  260034 cri.go:96] found id: ""
	I1225 19:02:49.652562  260034 logs.go:282] 0 containers: []
	W1225 19:02:49.652573  260034 logs.go:284] No container was found matching "kindnet"
	I1225 19:02:49.652580  260034 cri.go:61] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1225 19:02:49.652640  260034 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=storage-provisioner
	I1225 19:02:49.690860  260034 cri.go:96] found id: ""
	I1225 19:02:49.690888  260034 logs.go:282] 0 containers: []
	W1225 19:02:49.690911  260034 logs.go:284] No container was found matching "storage-provisioner"
	I1225 19:02:49.690923  260034 logs.go:123] Gathering logs for etcd [b9e20d208a63ece66f4b7d503d5220ba92810f69d6963dee30a8cf70eeec5508] ...
	I1225 19:02:49.690937  260034 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b9e20d208a63ece66f4b7d503d5220ba92810f69d6963dee30a8cf70eeec5508"
	I1225 19:02:49.758816  260034 logs.go:123] Gathering logs for kube-scheduler [47a1daa4bde269c4add70fd957319c2bd011e78566a1187d9a76b78fb4005782] ...
	I1225 19:02:49.758859  260034 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 47a1daa4bde269c4add70fd957319c2bd011e78566a1187d9a76b78fb4005782"
	I1225 19:02:49.795978  260034 logs.go:123] Gathering logs for kube-controller-manager [4d705cb1d8d35bb521ed14e8e331d8b60738d540fae000680afb7e9d190d17db] ...
	I1225 19:02:49.796019  260034 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 4d705cb1d8d35bb521ed14e8e331d8b60738d540fae000680afb7e9d190d17db"
	I1225 19:02:49.831406  260034 logs.go:123] Gathering logs for container status ...
	I1225 19:02:49.831438  260034 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1225 19:02:49.872802  260034 logs.go:123] Gathering logs for describe nodes ...
	I1225 19:02:49.872838  260034 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1225 19:02:49.953150  260034 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1225 19:02:49.953176  260034 logs.go:123] Gathering logs for kube-scheduler [5908cdc560e3221d929c96c779b2e73ece21f96acb63b3adbf448cc7de791f2b] ...
	I1225 19:02:49.953190  260034 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5908cdc560e3221d929c96c779b2e73ece21f96acb63b3adbf448cc7de791f2b"
	I1225 19:02:49.982648  260034 logs.go:123] Gathering logs for kube-controller-manager [192a48996072199570ad945ab3e2532fd7d3abac911bd3ba86671ed5f662855d] ...
	I1225 19:02:49.982682  260034 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 192a48996072199570ad945ab3e2532fd7d3abac911bd3ba86671ed5f662855d"
	I1225 19:02:50.009816  260034 logs.go:123] Gathering logs for kube-controller-manager [33343dda71f9e4a85025812aa89a625b9ed4aacd8cf40e537a040b7a352c6338] ...
	I1225 19:02:50.009841  260034 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 33343dda71f9e4a85025812aa89a625b9ed4aacd8cf40e537a040b7a352c6338"
	I1225 19:02:50.035707  260034 logs.go:123] Gathering logs for CRI-O ...
	I1225 19:02:50.035736  260034 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1225 19:02:50.093790  260034 logs.go:123] Gathering logs for kubelet ...
	I1225 19:02:50.093827  260034 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1225 19:02:50.194464  260034 logs.go:123] Gathering logs for dmesg ...
	I1225 19:02:50.194502  260034 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1225 19:02:50.212181  260034 logs.go:123] Gathering logs for kube-apiserver [6286b997d7e536f57739dca40618206bd2111cd0ea9142bc6f4203ad7d126123] ...
	I1225 19:02:50.212212  260034 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 6286b997d7e536f57739dca40618206bd2111cd0ea9142bc6f4203ad7d126123"
	I1225 19:02:50.248646  260034 logs.go:123] Gathering logs for kube-apiserver [e3d5802d1e3d9285d4b0925e9f5391b90305352a65abbab3d6461e977e07719c] ...
	I1225 19:02:50.248682  260034 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e3d5802d1e3d9285d4b0925e9f5391b90305352a65abbab3d6461e977e07719c"
	W1225 19:02:50.664148  283722 pod_ready.go:104] pod "coredns-66bc5c9577-n4nqj" is not "Ready", error: <nil>
	W1225 19:02:52.664384  283722 pod_ready.go:104] pod "coredns-66bc5c9577-n4nqj" is not "Ready", error: <nil>
	
	
	==> CRI-O <==
	Dec 25 19:02:19 old-k8s-version-163446 crio[570]: time="2025-12-25T19:02:19.256721597Z" level=info msg="Started container" PID=1742 containerID=dff3ea337b3b7f2eebc6a1b8971f3ad7f561f9d71c246254414b5857c2e68e88 description=kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-7fb8k/dashboard-metrics-scraper id=e9bbab68-2d64-4c5d-bbad-e4f828d26e14 name=/runtime.v1.RuntimeService/StartContainer sandboxID=ed31591ef3633165d9da7dd0fa0d1effb0c331079a542e6130150f8162e5e5f2
	Dec 25 19:02:20 old-k8s-version-163446 crio[570]: time="2025-12-25T19:02:20.217938847Z" level=info msg="Removing container: 853e8f275cae406ddd405e3d4d78490cafcf6ed513368d7188a4af3283985854" id=3e3cdde9-f1c4-4b34-b795-25b09a398b03 name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 25 19:02:20 old-k8s-version-163446 crio[570]: time="2025-12-25T19:02:20.227497073Z" level=info msg="Removed container 853e8f275cae406ddd405e3d4d78490cafcf6ed513368d7188a4af3283985854: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-7fb8k/dashboard-metrics-scraper" id=3e3cdde9-f1c4-4b34-b795-25b09a398b03 name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 25 19:02:32 old-k8s-version-163446 crio[570]: time="2025-12-25T19:02:32.248019848Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=0cc7bcb8-4571-4f92-abbf-b751d5c22d37 name=/runtime.v1.ImageService/ImageStatus
	Dec 25 19:02:32 old-k8s-version-163446 crio[570]: time="2025-12-25T19:02:32.249405499Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=624d1249-c99d-4893-8b6d-6c0f4d440cd0 name=/runtime.v1.ImageService/ImageStatus
	Dec 25 19:02:32 old-k8s-version-163446 crio[570]: time="2025-12-25T19:02:32.251831348Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=938c57b0-1948-46df-bc54-f2faaba880de name=/runtime.v1.RuntimeService/CreateContainer
	Dec 25 19:02:32 old-k8s-version-163446 crio[570]: time="2025-12-25T19:02:32.252121384Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 25 19:02:32 old-k8s-version-163446 crio[570]: time="2025-12-25T19:02:32.258240184Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 25 19:02:32 old-k8s-version-163446 crio[570]: time="2025-12-25T19:02:32.258430496Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/db30fdad163134de9ff6722eebee77220d016321b035e760269a5032e93db16b/merged/etc/passwd: no such file or directory"
	Dec 25 19:02:32 old-k8s-version-163446 crio[570]: time="2025-12-25T19:02:32.258460372Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/db30fdad163134de9ff6722eebee77220d016321b035e760269a5032e93db16b/merged/etc/group: no such file or directory"
	Dec 25 19:02:32 old-k8s-version-163446 crio[570]: time="2025-12-25T19:02:32.258765586Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 25 19:02:32 old-k8s-version-163446 crio[570]: time="2025-12-25T19:02:32.288656922Z" level=info msg="Created container 4ce1005c7b5926eec1ae94602837760de0b75dfa3656524847d215328c75ac0b: kube-system/storage-provisioner/storage-provisioner" id=938c57b0-1948-46df-bc54-f2faaba880de name=/runtime.v1.RuntimeService/CreateContainer
	Dec 25 19:02:32 old-k8s-version-163446 crio[570]: time="2025-12-25T19:02:32.289280063Z" level=info msg="Starting container: 4ce1005c7b5926eec1ae94602837760de0b75dfa3656524847d215328c75ac0b" id=26da47cf-8fe6-4c01-ae0a-093986da1327 name=/runtime.v1.RuntimeService/StartContainer
	Dec 25 19:02:32 old-k8s-version-163446 crio[570]: time="2025-12-25T19:02:32.291377518Z" level=info msg="Started container" PID=1758 containerID=4ce1005c7b5926eec1ae94602837760de0b75dfa3656524847d215328c75ac0b description=kube-system/storage-provisioner/storage-provisioner id=26da47cf-8fe6-4c01-ae0a-093986da1327 name=/runtime.v1.RuntimeService/StartContainer sandboxID=c99ce6c9f6e774697cb76b2f90f3cfc96a5f6e7a8235ee1d45e10a318861c6aa
	Dec 25 19:02:37 old-k8s-version-163446 crio[570]: time="2025-12-25T19:02:37.143820902Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=273bfeb9-40e4-4a3f-87b6-c2fe80d6ac8f name=/runtime.v1.ImageService/ImageStatus
	Dec 25 19:02:37 old-k8s-version-163446 crio[570]: time="2025-12-25T19:02:37.144767902Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=591c4535-969a-4e7f-b8ab-0c60981929b8 name=/runtime.v1.ImageService/ImageStatus
	Dec 25 19:02:37 old-k8s-version-163446 crio[570]: time="2025-12-25T19:02:37.145759577Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-7fb8k/dashboard-metrics-scraper" id=fe944e54-5c3a-4709-97d2-eef871920404 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 25 19:02:37 old-k8s-version-163446 crio[570]: time="2025-12-25T19:02:37.145890862Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 25 19:02:37 old-k8s-version-163446 crio[570]: time="2025-12-25T19:02:37.151661702Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 25 19:02:37 old-k8s-version-163446 crio[570]: time="2025-12-25T19:02:37.152180243Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 25 19:02:37 old-k8s-version-163446 crio[570]: time="2025-12-25T19:02:37.185986808Z" level=info msg="Created container ea767d69b5c8b7ce73aad86ce46fdf6f6047c47c581f8fb1f16f896ca43c1533: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-7fb8k/dashboard-metrics-scraper" id=fe944e54-5c3a-4709-97d2-eef871920404 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 25 19:02:37 old-k8s-version-163446 crio[570]: time="2025-12-25T19:02:37.186631517Z" level=info msg="Starting container: ea767d69b5c8b7ce73aad86ce46fdf6f6047c47c581f8fb1f16f896ca43c1533" id=598fd0c0-839f-41a5-887f-a31e1c29a3b0 name=/runtime.v1.RuntimeService/StartContainer
	Dec 25 19:02:37 old-k8s-version-163446 crio[570]: time="2025-12-25T19:02:37.188498093Z" level=info msg="Started container" PID=1774 containerID=ea767d69b5c8b7ce73aad86ce46fdf6f6047c47c581f8fb1f16f896ca43c1533 description=kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-7fb8k/dashboard-metrics-scraper id=598fd0c0-839f-41a5-887f-a31e1c29a3b0 name=/runtime.v1.RuntimeService/StartContainer sandboxID=ed31591ef3633165d9da7dd0fa0d1effb0c331079a542e6130150f8162e5e5f2
	Dec 25 19:02:37 old-k8s-version-163446 crio[570]: time="2025-12-25T19:02:37.263421261Z" level=info msg="Removing container: dff3ea337b3b7f2eebc6a1b8971f3ad7f561f9d71c246254414b5857c2e68e88" id=d6bb0370-eb52-4cbb-9712-0809cd0c1a50 name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 25 19:02:37 old-k8s-version-163446 crio[570]: time="2025-12-25T19:02:37.274268031Z" level=info msg="Removed container dff3ea337b3b7f2eebc6a1b8971f3ad7f561f9d71c246254414b5857c2e68e88: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-7fb8k/dashboard-metrics-scraper" id=d6bb0370-eb52-4cbb-9712-0809cd0c1a50 name=/runtime.v1.RuntimeService/RemoveContainer
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED             STATE               NAME                        ATTEMPT             POD ID              POD                                              NAMESPACE
	ea767d69b5c8b       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           19 seconds ago      Exited              dashboard-metrics-scraper   2                   ed31591ef3633       dashboard-metrics-scraper-5f989dc9cf-7fb8k       kubernetes-dashboard
	4ce1005c7b592       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           24 seconds ago      Running             storage-provisioner         1                   c99ce6c9f6e77       storage-provisioner                              kube-system
	e37efd9b2c0f4       docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029   40 seconds ago      Running             kubernetes-dashboard        0                   681ad6f331185       kubernetes-dashboard-8694d4445c-9sffb            kubernetes-dashboard
	ccffe0a749709       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc                                           55 seconds ago      Running             coredns                     0                   640c3d286e54a       coredns-5dd5756b68-chdzr                         kube-system
	6f9ee785f7e06       56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c                                           55 seconds ago      Running             busybox                     1                   057743e8e2dbd       busybox                                          default
	d25ed4ed70040       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           55 seconds ago      Exited              storage-provisioner         0                   c99ce6c9f6e77       storage-provisioner                              kube-system
	511e075a73b01       4921d7a6dffa922dd679732ba4797085c4f39e9a53bee8b6fdb1d463e8571251                                           55 seconds ago      Running             kindnet-cni                 0                   39f9bd9a7f7b5       kindnet-krjfj                                    kube-system
	376a01fa2f5cd       ea1030da44aa18666a7bf15fddd2a38c3143c3277159cb8bdd95f45c8ce62d7a                                           55 seconds ago      Running             kube-proxy                  0                   e5e0661513f15       kube-proxy-mxztf                                 kube-system
	b4b49a940b58f       bb5e0dde9054c02d6badee88547be7e7bb7b7b818d277c8a61b4b29484bbff95                                           58 seconds ago      Running             kube-apiserver              0                   33211a7ded48e       kube-apiserver-old-k8s-version-163446            kube-system
	739051af3cadd       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9                                           58 seconds ago      Running             etcd                        0                   ae42aaf9c0479       etcd-old-k8s-version-163446                      kube-system
	c1c1926bfed12       4be79c38a4bab6e1252a35697500e8a0d9c5c7c771d9fcc1935c9a7f6cdf4c62                                           58 seconds ago      Running             kube-controller-manager     0                   95be73706a6b2       kube-controller-manager-old-k8s-version-163446   kube-system
	b66569b95e263       f6f496300a2ae7a6727ccf3080d66d2fd22b6cfc271df5351c976c23a28bb157                                           58 seconds ago      Running             kube-scheduler              0                   c537bdd069a78       kube-scheduler-old-k8s-version-163446            kube-system
	
	
	==> coredns [ccffe0a74970948877693b5a337809301f8eb0c24483e7ad98ec3964e8a6ee9d] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 25cf5af2951e282c4b0e961a02fb5d3e57c974501832fee92eec17b5135b9ec9d9e87d2ac94e6d117a5ed3dd54e8800aa7b4479706eb54497145ccdb80397d1b
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] 127.0.0.1:33306 - 28751 "HINFO IN 5159646874572025505.42273120866640963. udp 55 false 512" NXDOMAIN qr,rd,ra 130 0.037363138s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> describe nodes <==
	Name:               old-k8s-version-163446
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=old-k8s-version-163446
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=65b0339f3ab6fa9cf527eb915d9288ef7a9c7fef
	                    minikube.k8s.io/name=old-k8s-version-163446
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_25T19_00_55_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 25 Dec 2025 19:00:51 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  old-k8s-version-163446
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 25 Dec 2025 19:02:51 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 25 Dec 2025 19:02:31 +0000   Thu, 25 Dec 2025 19:00:49 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 25 Dec 2025 19:02:31 +0000   Thu, 25 Dec 2025 19:00:49 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 25 Dec 2025 19:02:31 +0000   Thu, 25 Dec 2025 19:00:49 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 25 Dec 2025 19:02:31 +0000   Thu, 25 Dec 2025 19:01:20 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.103.2
	  Hostname:    old-k8s-version-163446
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863360Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863360Ki
	  pods:               110
	System Info:
	  Machine ID:                 6a05ebbd9dbde47388fc365d694bbbfd
	  System UUID:                0cc28420-dcfc-4f7d-abe6-5c56c5c91736
	  Boot ID:                    665c5054-bd76-444c-ba4d-23c4edde1464
	  Kernel Version:             6.8.0-1045-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.3
	  Kubelet Version:            v1.28.0
	  Kube-Proxy Version:         v1.28.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                              ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         94s
	  kube-system                 coredns-5dd5756b68-chdzr                          100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     110s
	  kube-system                 etcd-old-k8s-version-163446                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         2m3s
	  kube-system                 kindnet-krjfj                                     100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      110s
	  kube-system                 kube-apiserver-old-k8s-version-163446             250m (3%)     0 (0%)      0 (0%)           0 (0%)         2m3s
	  kube-system                 kube-controller-manager-old-k8s-version-163446    200m (2%)     0 (0%)      0 (0%)           0 (0%)         2m3s
	  kube-system                 kube-proxy-mxztf                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         110s
	  kube-system                 kube-scheduler-old-k8s-version-163446             100m (1%)     0 (0%)      0 (0%)           0 (0%)         2m3s
	  kube-system                 storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         110s
	  kubernetes-dashboard        dashboard-metrics-scraper-5f989dc9cf-7fb8k        0 (0%)        0 (0%)      0 (0%)           0 (0%)         44s
	  kubernetes-dashboard        kubernetes-dashboard-8694d4445c-9sffb             0 (0%)        0 (0%)      0 (0%)           0 (0%)         44s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 109s               kube-proxy       
	  Normal  Starting                 55s                kube-proxy       
	  Normal  NodeHasSufficientMemory  2m3s               kubelet          Node old-k8s-version-163446 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m3s               kubelet          Node old-k8s-version-163446 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m3s               kubelet          Node old-k8s-version-163446 status is now: NodeHasSufficientPID
	  Normal  Starting                 2m3s               kubelet          Starting kubelet.
	  Normal  RegisteredNode           111s               node-controller  Node old-k8s-version-163446 event: Registered Node old-k8s-version-163446 in Controller
	  Normal  NodeReady                97s                kubelet          Node old-k8s-version-163446 status is now: NodeReady
	  Normal  Starting                 59s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  59s (x9 over 59s)  kubelet          Node old-k8s-version-163446 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    59s (x8 over 59s)  kubelet          Node old-k8s-version-163446 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     59s (x7 over 59s)  kubelet          Node old-k8s-version-163446 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           44s                node-controller  Node old-k8s-version-163446 event: Registered Node old-k8s-version-163446 in Controller
	
	
	==> dmesg <==
	[Dec25 18:17] MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
	[  +0.001703] TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details.
	[  +0.001000] MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
	[  +0.084012] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
	[  +0.391152] i8042: Warning: Keylock active
	[  +0.010665] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.485479] block sda: the capability attribute has been deprecated.
	[  +0.079658] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.024208] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +4.790329] kauditd_printk_skb: 47 callbacks suppressed
	
	
	==> etcd [739051af3caddbf4be898cc7e7f82a012b1edd3b32b01e120d48d8420bf77f67] <==
	{"level":"info","ts":"2025-12-25T19:01:58.706442Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f23060b075c4c089 switched to configuration voters=(17451554867067011209)"}
	{"level":"info","ts":"2025-12-25T19:01:58.706585Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-12-25T19:01:58.706695Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"3336683c081d149d","local-member-id":"f23060b075c4c089","added-peer-id":"f23060b075c4c089","added-peer-peer-urls":["https://192.168.103.2:2380"]}
	{"level":"info","ts":"2025-12-25T19:01:58.706801Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"3336683c081d149d","local-member-id":"f23060b075c4c089","cluster-version":"3.5"}
	{"level":"info","ts":"2025-12-25T19:01:58.70683Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2025-12-25T19:01:58.7066Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-12-25T19:01:58.708055Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2025-12-25T19:01:58.708164Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.103.2:2380"}
	{"level":"info","ts":"2025-12-25T19:01:58.708205Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.103.2:2380"}
	{"level":"info","ts":"2025-12-25T19:01:58.708399Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"f23060b075c4c089","initial-advertise-peer-urls":["https://192.168.103.2:2380"],"listen-peer-urls":["https://192.168.103.2:2380"],"advertise-client-urls":["https://192.168.103.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.103.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2025-12-25T19:01:58.708429Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2025-12-25T19:01:59.798077Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f23060b075c4c089 is starting a new election at term 2"}
	{"level":"info","ts":"2025-12-25T19:01:59.798117Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f23060b075c4c089 became pre-candidate at term 2"}
	{"level":"info","ts":"2025-12-25T19:01:59.798164Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f23060b075c4c089 received MsgPreVoteResp from f23060b075c4c089 at term 2"}
	{"level":"info","ts":"2025-12-25T19:01:59.798178Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f23060b075c4c089 became candidate at term 3"}
	{"level":"info","ts":"2025-12-25T19:01:59.798183Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f23060b075c4c089 received MsgVoteResp from f23060b075c4c089 at term 3"}
	{"level":"info","ts":"2025-12-25T19:01:59.79821Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f23060b075c4c089 became leader at term 3"}
	{"level":"info","ts":"2025-12-25T19:01:59.798217Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: f23060b075c4c089 elected leader f23060b075c4c089 at term 3"}
	{"level":"info","ts":"2025-12-25T19:01:59.799675Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-12-25T19:01:59.799711Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-12-25T19:01:59.799666Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"f23060b075c4c089","local-member-attributes":"{Name:old-k8s-version-163446 ClientURLs:[https://192.168.103.2:2379]}","request-path":"/0/members/f23060b075c4c089/attributes","cluster-id":"3336683c081d149d","publish-timeout":"7s"}
	{"level":"info","ts":"2025-12-25T19:01:59.799875Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-12-25T19:01:59.799913Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2025-12-25T19:01:59.800883Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.103.2:2379"}
	{"level":"info","ts":"2025-12-25T19:01:59.800889Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	
	
	==> kernel <==
	 19:02:57 up 45 min,  0 user,  load average: 2.56, 2.41, 1.75
	Linux old-k8s-version-163446 6.8.0-1045-gcp #48~22.04.1-Ubuntu SMP Tue Nov 25 13:07:56 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [511e075a73b0123446e15801390ee877057b17d9055b6b3110d706ac86692627] <==
	I1225 19:02:01.665888       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1225 19:02:01.666157       1 main.go:139] hostIP = 192.168.103.2
	podIP = 192.168.103.2
	I1225 19:02:01.666339       1 main.go:148] setting mtu 1500 for CNI 
	I1225 19:02:01.666363       1 main.go:178] kindnetd IP family: "ipv4"
	I1225 19:02:01.666387       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-12-25T19:02:01Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1225 19:02:01.959646       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1225 19:02:01.959826       1 controller.go:381] "Waiting for informer caches to sync"
	I1225 19:02:01.959984       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1225 19:02:02.059679       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1225 19:02:02.459888       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1225 19:02:02.459946       1 metrics.go:72] Registering metrics
	I1225 19:02:02.460029       1 controller.go:711] "Syncing nftables rules"
	I1225 19:02:11.868042       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1225 19:02:11.868111       1 main.go:301] handling current node
	I1225 19:02:21.868612       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1225 19:02:21.868668       1 main.go:301] handling current node
	I1225 19:02:31.867984       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1225 19:02:31.868037       1 main.go:301] handling current node
	I1225 19:02:41.869023       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1225 19:02:41.869314       1 main.go:301] handling current node
	I1225 19:02:51.869094       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1225 19:02:51.869159       1 main.go:301] handling current node
	
	
	==> kube-apiserver [b4b49a940b58f765b0e9b7ce25aea04517e3af0b3e9f3d8cb36a460d92e868f4] <==
	I1225 19:02:00.746662       1 shared_informer.go:318] Caches are synced for node_authorizer
	I1225 19:02:00.784726       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I1225 19:02:00.784761       1 apf_controller.go:377] Running API Priority and Fairness config worker
	I1225 19:02:00.784771       1 apf_controller.go:380] Running API Priority and Fairness periodic rebalancing process
	I1225 19:02:00.784782       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	I1225 19:02:00.784802       1 aggregator.go:166] initial CRD sync complete...
	I1225 19:02:00.784815       1 autoregister_controller.go:141] Starting autoregister controller
	I1225 19:02:00.784820       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1225 19:02:00.784738       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I1225 19:02:00.784832       1 cache.go:39] Caches are synced for autoregister controller
	I1225 19:02:00.784741       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1225 19:02:00.788440       1 shared_informer.go:318] Caches are synced for configmaps
	E1225 19:02:00.790340       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1225 19:02:01.567091       1 controller.go:624] quota admission added evaluator for: namespaces
	I1225 19:02:01.601207       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I1225 19:02:01.617541       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1225 19:02:01.626660       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1225 19:02:01.634431       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I1225 19:02:01.668243       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.111.102.80"}
	I1225 19:02:01.682927       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.107.13.17"}
	I1225 19:02:01.683283       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1225 19:02:13.415502       1 controller.go:624] quota admission added evaluator for: endpoints
	I1225 19:02:13.415547       1 controller.go:624] quota admission added evaluator for: endpoints
	I1225 19:02:13.416127       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1225 19:02:13.440037       1 controller.go:624] quota admission added evaluator for: replicasets.apps
	
	
	==> kube-controller-manager [c1c1926bfed12740e7d65b2cd81a01a86dd6a1887ce4e9b9fc5fd2fa5d9e0552] <==
	I1225 19:02:13.466874       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="24.877069ms"
	I1225 19:02:13.467663       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="24.035516ms"
	I1225 19:02:13.474623       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="7.674579ms"
	I1225 19:02:13.474622       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="6.908763ms"
	I1225 19:02:13.474794       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="55.414µs"
	I1225 19:02:13.474800       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="37.481µs"
	I1225 19:02:13.479875       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="87.278µs"
	I1225 19:02:13.488602       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="77.529µs"
	I1225 19:02:13.520027       1 shared_informer.go:318] Caches are synced for disruption
	I1225 19:02:13.555513       1 shared_informer.go:318] Caches are synced for crt configmap
	I1225 19:02:13.573840       1 shared_informer.go:318] Caches are synced for resource quota
	I1225 19:02:13.625873       1 shared_informer.go:318] Caches are synced for bootstrap_signer
	I1225 19:02:13.637987       1 shared_informer.go:318] Caches are synced for resource quota
	I1225 19:02:13.951237       1 shared_informer.go:318] Caches are synced for garbage collector
	I1225 19:02:13.952352       1 shared_informer.go:318] Caches are synced for garbage collector
	I1225 19:02:13.952382       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	I1225 19:02:17.230432       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="9.646371ms"
	I1225 19:02:17.230517       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="47.364µs"
	I1225 19:02:19.226293       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="69.326µs"
	I1225 19:02:20.227969       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="94.416µs"
	I1225 19:02:21.230010       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="82.55µs"
	I1225 19:02:37.274639       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="114.544µs"
	I1225 19:02:39.925668       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="11.180442ms"
	I1225 19:02:39.925795       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="80.825µs"
	I1225 19:02:43.779869       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="83.76µs"
	
	
	==> kube-proxy [376a01fa2f5cd87c0dae38ad74332c0ae0c0d93fa441f19a90ff655c9ac8f482] <==
	I1225 19:02:01.539246       1 server_others.go:69] "Using iptables proxy"
	I1225 19:02:01.550139       1 node.go:141] Successfully retrieved node IP: 192.168.103.2
	I1225 19:02:01.569877       1 server.go:632] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1225 19:02:01.572672       1 server_others.go:152] "Using iptables Proxier"
	I1225 19:02:01.572703       1 server_others.go:421] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I1225 19:02:01.572710       1 server_others.go:438] "Defaulting to no-op detect-local"
	I1225 19:02:01.572733       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1225 19:02:01.572950       1 server.go:846] "Version info" version="v1.28.0"
	I1225 19:02:01.572964       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1225 19:02:01.573566       1 config.go:97] "Starting endpoint slice config controller"
	I1225 19:02:01.573605       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1225 19:02:01.573634       1 config.go:188] "Starting service config controller"
	I1225 19:02:01.573647       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1225 19:02:01.573845       1 config.go:315] "Starting node config controller"
	I1225 19:02:01.573861       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1225 19:02:01.673707       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I1225 19:02:01.673731       1 shared_informer.go:318] Caches are synced for service config
	I1225 19:02:01.673941       1 shared_informer.go:318] Caches are synced for node config
	
	
	==> kube-scheduler [b66569b95e263d0c33bf3838b444600f919279c26935aa24c1bd52a5a645a4dd] <==
	I1225 19:01:59.029075       1 serving.go:348] Generated self-signed cert in-memory
	I1225 19:02:00.744373       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.28.0"
	I1225 19:02:00.744396       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1225 19:02:00.747792       1 requestheader_controller.go:169] Starting RequestHeaderAuthRequestController
	I1225 19:02:00.747816       1 shared_informer.go:311] Waiting for caches to sync for RequestHeaderAuthRequestController
	I1225 19:02:00.747820       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1225 19:02:00.747839       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1225 19:02:00.747942       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1225 19:02:00.747979       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
	I1225 19:02:00.749685       1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
	I1225 19:02:00.749814       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I1225 19:02:00.848214       1 shared_informer.go:318] Caches are synced for RequestHeaderAuthRequestController
	I1225 19:02:00.848244       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1225 19:02:00.848246       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
	
	
	==> kubelet <==
	Dec 25 19:02:13 old-k8s-version-163446 kubelet[733]: I1225 19:02:13.464545     733 topology_manager.go:215] "Topology Admit Handler" podUID="38c48988-6be8-47e5-a66b-4c0f3bc3dbea" podNamespace="kubernetes-dashboard" podName="dashboard-metrics-scraper-5f989dc9cf-7fb8k"
	Dec 25 19:02:13 old-k8s-version-163446 kubelet[733]: I1225 19:02:13.587090     733 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7qwzj\" (UniqueName: \"kubernetes.io/projected/8670172f-1b60-424f-b7a5-cf89fb165120-kube-api-access-7qwzj\") pod \"kubernetes-dashboard-8694d4445c-9sffb\" (UID: \"8670172f-1b60-424f-b7a5-cf89fb165120\") " pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-9sffb"
	Dec 25 19:02:13 old-k8s-version-163446 kubelet[733]: I1225 19:02:13.587174     733 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/8670172f-1b60-424f-b7a5-cf89fb165120-tmp-volume\") pod \"kubernetes-dashboard-8694d4445c-9sffb\" (UID: \"8670172f-1b60-424f-b7a5-cf89fb165120\") " pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-9sffb"
	Dec 25 19:02:13 old-k8s-version-163446 kubelet[733]: I1225 19:02:13.587236     733 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8lbbf\" (UniqueName: \"kubernetes.io/projected/38c48988-6be8-47e5-a66b-4c0f3bc3dbea-kube-api-access-8lbbf\") pod \"dashboard-metrics-scraper-5f989dc9cf-7fb8k\" (UID: \"38c48988-6be8-47e5-a66b-4c0f3bc3dbea\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-7fb8k"
	Dec 25 19:02:13 old-k8s-version-163446 kubelet[733]: I1225 19:02:13.587323     733 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/38c48988-6be8-47e5-a66b-4c0f3bc3dbea-tmp-volume\") pod \"dashboard-metrics-scraper-5f989dc9cf-7fb8k\" (UID: \"38c48988-6be8-47e5-a66b-4c0f3bc3dbea\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-7fb8k"
	Dec 25 19:02:19 old-k8s-version-163446 kubelet[733]: I1225 19:02:19.213359     733 scope.go:117] "RemoveContainer" containerID="853e8f275cae406ddd405e3d4d78490cafcf6ed513368d7188a4af3283985854"
	Dec 25 19:02:19 old-k8s-version-163446 kubelet[733]: I1225 19:02:19.226498     733 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-9sffb" podStartSLOduration=3.328246562 podCreationTimestamp="2025-12-25 19:02:13 +0000 UTC" firstStartedPulling="2025-12-25 19:02:13.790234842 +0000 UTC m=+15.736993272" lastFinishedPulling="2025-12-25 19:02:16.68844283 +0000 UTC m=+18.635201265" observedRunningTime="2025-12-25 19:02:17.220581012 +0000 UTC m=+19.167339471" watchObservedRunningTime="2025-12-25 19:02:19.226454555 +0000 UTC m=+21.173212991"
	Dec 25 19:02:20 old-k8s-version-163446 kubelet[733]: I1225 19:02:20.216751     733 scope.go:117] "RemoveContainer" containerID="853e8f275cae406ddd405e3d4d78490cafcf6ed513368d7188a4af3283985854"
	Dec 25 19:02:20 old-k8s-version-163446 kubelet[733]: I1225 19:02:20.216961     733 scope.go:117] "RemoveContainer" containerID="dff3ea337b3b7f2eebc6a1b8971f3ad7f561f9d71c246254414b5857c2e68e88"
	Dec 25 19:02:20 old-k8s-version-163446 kubelet[733]: E1225 19:02:20.217325     733 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-7fb8k_kubernetes-dashboard(38c48988-6be8-47e5-a66b-4c0f3bc3dbea)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-7fb8k" podUID="38c48988-6be8-47e5-a66b-4c0f3bc3dbea"
	Dec 25 19:02:21 old-k8s-version-163446 kubelet[733]: I1225 19:02:21.220469     733 scope.go:117] "RemoveContainer" containerID="dff3ea337b3b7f2eebc6a1b8971f3ad7f561f9d71c246254414b5857c2e68e88"
	Dec 25 19:02:21 old-k8s-version-163446 kubelet[733]: E1225 19:02:21.220840     733 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-7fb8k_kubernetes-dashboard(38c48988-6be8-47e5-a66b-4c0f3bc3dbea)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-7fb8k" podUID="38c48988-6be8-47e5-a66b-4c0f3bc3dbea"
	Dec 25 19:02:23 old-k8s-version-163446 kubelet[733]: I1225 19:02:23.767502     733 scope.go:117] "RemoveContainer" containerID="dff3ea337b3b7f2eebc6a1b8971f3ad7f561f9d71c246254414b5857c2e68e88"
	Dec 25 19:02:23 old-k8s-version-163446 kubelet[733]: E1225 19:02:23.767827     733 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-7fb8k_kubernetes-dashboard(38c48988-6be8-47e5-a66b-4c0f3bc3dbea)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-7fb8k" podUID="38c48988-6be8-47e5-a66b-4c0f3bc3dbea"
	Dec 25 19:02:32 old-k8s-version-163446 kubelet[733]: I1225 19:02:32.247484     733 scope.go:117] "RemoveContainer" containerID="d25ed4ed70040fac28d88caa14abd75d2a95994c5887f5143d7fa3e7f5b52c82"
	Dec 25 19:02:37 old-k8s-version-163446 kubelet[733]: I1225 19:02:37.143267     733 scope.go:117] "RemoveContainer" containerID="dff3ea337b3b7f2eebc6a1b8971f3ad7f561f9d71c246254414b5857c2e68e88"
	Dec 25 19:02:37 old-k8s-version-163446 kubelet[733]: I1225 19:02:37.262133     733 scope.go:117] "RemoveContainer" containerID="dff3ea337b3b7f2eebc6a1b8971f3ad7f561f9d71c246254414b5857c2e68e88"
	Dec 25 19:02:37 old-k8s-version-163446 kubelet[733]: I1225 19:02:37.262400     733 scope.go:117] "RemoveContainer" containerID="ea767d69b5c8b7ce73aad86ce46fdf6f6047c47c581f8fb1f16f896ca43c1533"
	Dec 25 19:02:37 old-k8s-version-163446 kubelet[733]: E1225 19:02:37.262792     733 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-7fb8k_kubernetes-dashboard(38c48988-6be8-47e5-a66b-4c0f3bc3dbea)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-7fb8k" podUID="38c48988-6be8-47e5-a66b-4c0f3bc3dbea"
	Dec 25 19:02:43 old-k8s-version-163446 kubelet[733]: I1225 19:02:43.766887     733 scope.go:117] "RemoveContainer" containerID="ea767d69b5c8b7ce73aad86ce46fdf6f6047c47c581f8fb1f16f896ca43c1533"
	Dec 25 19:02:43 old-k8s-version-163446 kubelet[733]: E1225 19:02:43.767324     733 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-7fb8k_kubernetes-dashboard(38c48988-6be8-47e5-a66b-4c0f3bc3dbea)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-7fb8k" podUID="38c48988-6be8-47e5-a66b-4c0f3bc3dbea"
	Dec 25 19:02:54 old-k8s-version-163446 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Dec 25 19:02:54 old-k8s-version-163446 systemd[1]: kubelet.service: Deactivated successfully.
	Dec 25 19:02:54 old-k8s-version-163446 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 25 19:02:54 old-k8s-version-163446 systemd[1]: kubelet.service: Consumed 1.546s CPU time.
	
	
	==> kubernetes-dashboard [e37efd9b2c0f4e3339db38b105725fe701ef12b037a5a8d35c075b3f754150c7] <==
	2025/12/25 19:02:16 Using namespace: kubernetes-dashboard
	2025/12/25 19:02:16 Using in-cluster config to connect to apiserver
	2025/12/25 19:02:16 Using secret token for csrf signing
	2025/12/25 19:02:16 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/12/25 19:02:16 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/12/25 19:02:16 Successful initial request to the apiserver, version: v1.28.0
	2025/12/25 19:02:16 Generating JWE encryption key
	2025/12/25 19:02:16 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/12/25 19:02:16 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/12/25 19:02:16 Initializing JWE encryption key from synchronized object
	2025/12/25 19:02:16 Creating in-cluster Sidecar client
	2025/12/25 19:02:16 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/12/25 19:02:16 Serving insecurely on HTTP port: 9090
	2025/12/25 19:02:46 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/12/25 19:02:16 Starting overwatch
	
	
	==> storage-provisioner [4ce1005c7b5926eec1ae94602837760de0b75dfa3656524847d215328c75ac0b] <==
	I1225 19:02:32.304668       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1225 19:02:32.314203       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1225 19:02:32.314245       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1225 19:02:49.715173       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1225 19:02:49.715284       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"f853f802-d45c-4cc9-a8ea-2b9b3cbed157", APIVersion:"v1", ResourceVersion:"626", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' old-k8s-version-163446_61d5a32b-67aa-4448-8cf1-69ec15ea9eac became leader
	I1225 19:02:49.715361       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_old-k8s-version-163446_61d5a32b-67aa-4448-8cf1-69ec15ea9eac!
	I1225 19:02:49.815643       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_old-k8s-version-163446_61d5a32b-67aa-4448-8cf1-69ec15ea9eac!
	
	
	==> storage-provisioner [d25ed4ed70040fac28d88caa14abd75d2a95994c5887f5143d7fa3e7f5b52c82] <==
	I1225 19:02:01.520724       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1225 19:02:31.525242       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

                                                
                                                
-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-163446 -n old-k8s-version-163446
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-163446 -n old-k8s-version-163446: exit status 2 (317.346165ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:270: (dbg) Run:  kubectl --context old-k8s-version-163446 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:294: <<< TestStartStop/group/old-k8s-version/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:295: ---------------------/post-mortem---------------------------------
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect old-k8s-version-163446
helpers_test.go:244: (dbg) docker inspect old-k8s-version-163446:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "37396ae2407e2231768404ec79c8765ad89338beefc37987d4c4bd842f074e05",
	        "Created": "2025-12-25T19:00:38.731521693Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 276414,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-25T19:01:51.961086062Z",
	            "FinishedAt": "2025-12-25T19:01:50.931804842Z"
	        },
	        "Image": "sha256:c444b5e11df9d6bc496256b0e11f4e11deb33a885211ded2c3a4667df55bcb6b",
	        "ResolvConfPath": "/var/lib/docker/containers/37396ae2407e2231768404ec79c8765ad89338beefc37987d4c4bd842f074e05/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/37396ae2407e2231768404ec79c8765ad89338beefc37987d4c4bd842f074e05/hostname",
	        "HostsPath": "/var/lib/docker/containers/37396ae2407e2231768404ec79c8765ad89338beefc37987d4c4bd842f074e05/hosts",
	        "LogPath": "/var/lib/docker/containers/37396ae2407e2231768404ec79c8765ad89338beefc37987d4c4bd842f074e05/37396ae2407e2231768404ec79c8765ad89338beefc37987d4c4bd842f074e05-json.log",
	        "Name": "/old-k8s-version-163446",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-163446:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "old-k8s-version-163446",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "37396ae2407e2231768404ec79c8765ad89338beefc37987d4c4bd842f074e05",
	                "LowerDir": "/var/lib/docker/overlay2/da66b1259c79665422104588e6a075c075b8c19dd9bb347e3c8d2431d2f57222-init/diff:/var/lib/docker/overlay2/8152586e7e91edad0090b5c322534edd1346ae6dc28cbca1827aa4c23f366758/diff",
	                "MergedDir": "/var/lib/docker/overlay2/da66b1259c79665422104588e6a075c075b8c19dd9bb347e3c8d2431d2f57222/merged",
	                "UpperDir": "/var/lib/docker/overlay2/da66b1259c79665422104588e6a075c075b8c19dd9bb347e3c8d2431d2f57222/diff",
	                "WorkDir": "/var/lib/docker/overlay2/da66b1259c79665422104588e6a075c075b8c19dd9bb347e3c8d2431d2f57222/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-163446",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-163446/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-163446",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-163446",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-163446",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "399b9f1b98e16b80d31e8c5b0795c6f562eed3a6df436c25c4f911b60ca7d8f7",
	            "SandboxKey": "/var/run/docker/netns/399b9f1b98e1",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33073"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33074"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33077"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33075"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33076"
	                    }
	                ]
	            },
	            "Networks": {
	                "old-k8s-version-163446": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.103.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "c6b6e067d0596f86d64c9b68f4f95f2e3f9026a738d9a6486ac091374c416820",
	                    "EndpointID": "6b70cdf41c433d8e06cdfe30d961233762dd32fdc117cc9d7b24bc02164dadcb",
	                    "Gateway": "192.168.103.1",
	                    "IPAddress": "192.168.103.2",
	                    "MacAddress": "86:c9:c7:86:22:55",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "old-k8s-version-163446",
	                        "37396ae2407e"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-163446 -n old-k8s-version-163446
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-163446 -n old-k8s-version-163446: exit status 2 (326.513673ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
helpers_test.go:253: <<< TestStartStop/group/old-k8s-version/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-163446 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-amd64 -p old-k8s-version-163446 logs -n 25: (1.134567993s)
helpers_test.go:261: TestStartStop/group/old-k8s-version/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────
────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │          PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────
────┤
	│ delete  │ -p test-preload-632730                                                                                                                                                                                                                        │ test-preload-632730       │ jenkins │ v1.37.0 │ 25 Dec 25 19:00 UTC │ 25 Dec 25 19:00 UTC │
	│ start   │ -p kubernetes-upgrade-498224 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                                                                                      │ kubernetes-upgrade-498224 │ jenkins │ v1.37.0 │ 25 Dec 25 19:00 UTC │ 25 Dec 25 19:00 UTC │
	│ delete  │ -p stopped-upgrade-746190                                                                                                                                                                                                                     │ stopped-upgrade-746190    │ jenkins │ v1.37.0 │ 25 Dec 25 19:00 UTC │ 25 Dec 25 19:00 UTC │
	│ start   │ -p old-k8s-version-163446 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-163446    │ jenkins │ v1.37.0 │ 25 Dec 25 19:00 UTC │ 25 Dec 25 19:01 UTC │
	│ stop    │ -p kubernetes-upgrade-498224 --alsologtostderr                                                                                                                                                                                                │ kubernetes-upgrade-498224 │ jenkins │ v1.37.0 │ 25 Dec 25 19:00 UTC │ 25 Dec 25 19:00 UTC │
	│ start   │ -p kubernetes-upgrade-498224 --memory=3072 --kubernetes-version=v1.35.0-rc.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                                                                                 │ kubernetes-upgrade-498224 │ jenkins │ v1.37.0 │ 25 Dec 25 19:00 UTC │                     │
	│ start   │ -p cert-expiration-002470 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio                                                                                                                                     │ cert-expiration-002470    │ jenkins │ v1.37.0 │ 25 Dec 25 19:00 UTC │ 25 Dec 25 19:01 UTC │
	│ delete  │ -p cert-expiration-002470                                                                                                                                                                                                                     │ cert-expiration-002470    │ jenkins │ v1.37.0 │ 25 Dec 25 19:01 UTC │ 25 Dec 25 19:01 UTC │
	│ start   │ -p no-preload-148352 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-rc.1                                                                                  │ no-preload-148352         │ jenkins │ v1.37.0 │ 25 Dec 25 19:01 UTC │ 25 Dec 25 19:01 UTC │
	│ delete  │ -p running-upgrade-861192                                                                                                                                                                                                                     │ running-upgrade-861192    │ jenkins │ v1.37.0 │ 25 Dec 25 19:01 UTC │ 25 Dec 25 19:01 UTC │
	│ start   │ -p embed-certs-684693 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.3                                                                                        │ embed-certs-684693        │ jenkins │ v1.37.0 │ 25 Dec 25 19:01 UTC │ 25 Dec 25 19:02 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-163446 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-163446    │ jenkins │ v1.37.0 │ 25 Dec 25 19:01 UTC │                     │
	│ stop    │ -p old-k8s-version-163446 --alsologtostderr -v=3                                                                                                                                                                                              │ old-k8s-version-163446    │ jenkins │ v1.37.0 │ 25 Dec 25 19:01 UTC │ 25 Dec 25 19:01 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-163446 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-163446    │ jenkins │ v1.37.0 │ 25 Dec 25 19:01 UTC │ 25 Dec 25 19:01 UTC │
	│ start   │ -p old-k8s-version-163446 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-163446    │ jenkins │ v1.37.0 │ 25 Dec 25 19:01 UTC │ 25 Dec 25 19:02 UTC │
	│ addons  │ enable metrics-server -p no-preload-148352 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-148352         │ jenkins │ v1.37.0 │ 25 Dec 25 19:02 UTC │                     │
	│ stop    │ -p no-preload-148352 --alsologtostderr -v=3                                                                                                                                                                                                   │ no-preload-148352         │ jenkins │ v1.37.0 │ 25 Dec 25 19:02 UTC │ 25 Dec 25 19:02 UTC │
	│ addons  │ enable metrics-server -p embed-certs-684693 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-684693        │ jenkins │ v1.37.0 │ 25 Dec 25 19:02 UTC │                     │
	│ stop    │ -p embed-certs-684693 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-684693        │ jenkins │ v1.37.0 │ 25 Dec 25 19:02 UTC │ 25 Dec 25 19:02 UTC │
	│ addons  │ enable dashboard -p no-preload-148352 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-148352         │ jenkins │ v1.37.0 │ 25 Dec 25 19:02 UTC │ 25 Dec 25 19:02 UTC │
	│ start   │ -p no-preload-148352 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-rc.1                                                                                  │ no-preload-148352         │ jenkins │ v1.37.0 │ 25 Dec 25 19:02 UTC │                     │
	│ addons  │ enable dashboard -p embed-certs-684693 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-684693        │ jenkins │ v1.37.0 │ 25 Dec 25 19:02 UTC │ 25 Dec 25 19:02 UTC │
	│ start   │ -p embed-certs-684693 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.3                                                                                        │ embed-certs-684693        │ jenkins │ v1.37.0 │ 25 Dec 25 19:02 UTC │                     │
	│ image   │ old-k8s-version-163446 image list --format=json                                                                                                                                                                                               │ old-k8s-version-163446    │ jenkins │ v1.37.0 │ 25 Dec 25 19:02 UTC │ 25 Dec 25 19:02 UTC │
	│ pause   │ -p old-k8s-version-163446 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-163446    │ jenkins │ v1.37.0 │ 25 Dec 25 19:02 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────
────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/25 19:02:34
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1225 19:02:34.332240  283722 out.go:360] Setting OutFile to fd 1 ...
	I1225 19:02:34.332340  283722 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1225 19:02:34.332351  283722 out.go:374] Setting ErrFile to fd 2...
	I1225 19:02:34.332356  283722 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1225 19:02:34.332559  283722 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22301-5579/.minikube/bin
	I1225 19:02:34.333051  283722 out.go:368] Setting JSON to false
	I1225 19:02:34.334249  283722 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":2702,"bootTime":1766686652,"procs":339,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1225 19:02:34.334303  283722 start.go:143] virtualization: kvm guest
	I1225 19:02:34.336161  283722 out.go:179] * [embed-certs-684693] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1225 19:02:34.337358  283722 out.go:179]   - MINIKUBE_LOCATION=22301
	I1225 19:02:34.337377  283722 notify.go:221] Checking for updates...
	I1225 19:02:34.339392  283722 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1225 19:02:34.340487  283722 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22301-5579/kubeconfig
	I1225 19:02:34.341696  283722 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22301-5579/.minikube
	I1225 19:02:34.342911  283722 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1225 19:02:34.344133  283722 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1225 19:02:34.345601  283722 config.go:182] Loaded profile config "embed-certs-684693": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1225 19:02:34.346165  283722 driver.go:422] Setting default libvirt URI to qemu:///system
	I1225 19:02:34.368350  283722 docker.go:124] docker version: linux-29.1.3:Docker Engine - Community
	I1225 19:02:34.368437  283722 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1225 19:02:34.430944  283722 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:64 OomKillDisable:false NGoroutines:75 SystemTime:2025-12-25 19:02:34.421454451 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:dea7da592f5d1d2b7755e3a161be07f43fad8f75 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1225 19:02:34.431052  283722 docker.go:319] overlay module found
	I1225 19:02:34.433366  283722 out.go:179] * Using the docker driver based on existing profile
	I1225 19:02:34.434389  283722 start.go:309] selected driver: docker
	I1225 19:02:34.434403  283722 start.go:928] validating driver "docker" against &{Name:embed-certs-684693 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:embed-certs-684693 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1225 19:02:34.434484  283722 start.go:939] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1225 19:02:34.435062  283722 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1225 19:02:34.496223  283722 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:64 OomKillDisable:false NGoroutines:75 SystemTime:2025-12-25 19:02:34.485758437 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:dea7da592f5d1d2b7755e3a161be07f43fad8f75 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1225 19:02:34.496551  283722 start_flags.go:1019] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1225 19:02:34.496584  283722 cni.go:84] Creating CNI manager for ""
	I1225 19:02:34.496654  283722 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1225 19:02:34.496710  283722 start.go:353] cluster config:
	{Name:embed-certs-684693 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:embed-certs-684693 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1225 19:02:34.498661  283722 out.go:179] * Starting "embed-certs-684693" primary control-plane node in "embed-certs-684693" cluster
	I1225 19:02:34.499767  283722 cache.go:134] Beginning downloading kic base image for docker with crio
	I1225 19:02:34.500841  283722 out.go:179] * Pulling base image v0.0.48-1766570851-22316 ...
	I1225 19:02:34.501745  283722 preload.go:188] Checking if preload exists for k8s version v1.34.3 and runtime crio
	I1225 19:02:34.501774  283722 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22301-5579/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.3-cri-o-overlay-amd64.tar.lz4
	I1225 19:02:34.501782  283722 cache.go:65] Caching tarball of preloaded images
	I1225 19:02:34.501832  283722 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a in local docker daemon
	I1225 19:02:34.501848  283722 preload.go:251] Found /home/jenkins/minikube-integration/22301-5579/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1225 19:02:34.501855  283722 cache.go:68] Finished verifying existence of preloaded tar for v1.34.3 on crio
	I1225 19:02:34.502014  283722 profile.go:143] Saving config to /home/jenkins/minikube-integration/22301-5579/.minikube/profiles/embed-certs-684693/config.json ...
	I1225 19:02:34.522061  283722 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a in local docker daemon, skipping pull
	I1225 19:02:34.522083  283722 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a exists in daemon, skipping load
	I1225 19:02:34.522117  283722 cache.go:243] Successfully downloaded all kic artifacts
	I1225 19:02:34.522151  283722 start.go:360] acquireMachinesLock for embed-certs-684693: {Name:mkcef018e2fd6119543ae4deda4e408dabf7b389 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1225 19:02:34.522238  283722 start.go:364] duration metric: took 50.604µs to acquireMachinesLock for "embed-certs-684693"
	I1225 19:02:34.522271  283722 start.go:96] Skipping create...Using existing machine configuration
	I1225 19:02:34.522282  283722 fix.go:54] fixHost starting: 
	I1225 19:02:34.522528  283722 cli_runner.go:164] Run: docker container inspect embed-certs-684693 --format={{.State.Status}}
	I1225 19:02:34.540267  283722 fix.go:112] recreateIfNeeded on embed-certs-684693: state=Stopped err=<nil>
	W1225 19:02:34.540317  283722 fix.go:138] unexpected machine state, will restart: <nil>
	W1225 19:02:32.250455  276130 pod_ready.go:104] pod "coredns-5dd5756b68-chdzr" is not "Ready", error: <nil>
	W1225 19:02:34.748137  276130 pod_ready.go:104] pod "coredns-5dd5756b68-chdzr" is not "Ready", error: <nil>
	I1225 19:02:32.709561  281279 api_server.go:299] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1225 19:02:32.713814  281279 api_server.go:325] https://192.168.85.2:8443/healthz returned 200:
	ok
	I1225 19:02:32.714806  281279 api_server.go:141] control plane version: v1.35.0-rc.1
	I1225 19:02:32.714851  281279 api_server.go:131] duration metric: took 1.006292879s to wait for apiserver health ...
	I1225 19:02:32.714860  281279 system_pods.go:43] waiting for kube-system pods to appear ...
	I1225 19:02:32.718354  281279 system_pods.go:59] 8 kube-system pods found
	I1225 19:02:32.718397  281279 system_pods.go:61] "coredns-7d764666f9-lqvms" [87fc533e-6490-4d36-a61b-a754a22afd56] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1225 19:02:32.718415  281279 system_pods.go:61] "etcd-no-preload-148352" [07fbfda5-ced9-48bb-819a-27d7a9d3c8c6] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1225 19:02:32.718426  281279 system_pods.go:61] "kindnet-jx25d" [25f416b3-e74e-4d6e-9b1b-d4ddf07659c4] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1225 19:02:32.718440  281279 system_pods.go:61] "kube-apiserver-no-preload-148352" [9bec5758-56c2-488b-8593-35fcdb4ec786] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1225 19:02:32.718452  281279 system_pods.go:61] "kube-controller-manager-no-preload-148352" [b44b6979-c22b-402f-8ce0-fabd78630461] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1225 19:02:32.718466  281279 system_pods.go:61] "kube-proxy-j2p4x" [ae9faca6-3e41-4e10-ae96-b7a397c3be75] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1225 19:02:32.718482  281279 system_pods.go:61] "kube-scheduler-no-preload-148352" [6dcf4763-851f-4d07-b708-4b5a579c03cb] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1225 19:02:32.718493  281279 system_pods.go:61] "storage-provisioner" [4caa74a1-bb32-45a7-9cc3-d0af791be23e] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1225 19:02:32.718501  281279 system_pods.go:74] duration metric: took 3.635053ms to wait for pod list to return data ...
	I1225 19:02:32.718511  281279 default_sa.go:34] waiting for default service account to be created ...
	I1225 19:02:32.720677  281279 default_sa.go:45] found service account: "default"
	I1225 19:02:32.720695  281279 default_sa.go:55] duration metric: took 2.176461ms for default service account to be created ...
	I1225 19:02:32.720702  281279 system_pods.go:116] waiting for k8s-apps to be running ...
	I1225 19:02:32.723119  281279 system_pods.go:86] 8 kube-system pods found
	I1225 19:02:32.723143  281279 system_pods.go:89] "coredns-7d764666f9-lqvms" [87fc533e-6490-4d36-a61b-a754a22afd56] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1225 19:02:32.723150  281279 system_pods.go:89] "etcd-no-preload-148352" [07fbfda5-ced9-48bb-819a-27d7a9d3c8c6] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1225 19:02:32.723181  281279 system_pods.go:89] "kindnet-jx25d" [25f416b3-e74e-4d6e-9b1b-d4ddf07659c4] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1225 19:02:32.723188  281279 system_pods.go:89] "kube-apiserver-no-preload-148352" [9bec5758-56c2-488b-8593-35fcdb4ec786] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1225 19:02:32.723197  281279 system_pods.go:89] "kube-controller-manager-no-preload-148352" [b44b6979-c22b-402f-8ce0-fabd78630461] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1225 19:02:32.723202  281279 system_pods.go:89] "kube-proxy-j2p4x" [ae9faca6-3e41-4e10-ae96-b7a397c3be75] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1225 19:02:32.723216  281279 system_pods.go:89] "kube-scheduler-no-preload-148352" [6dcf4763-851f-4d07-b708-4b5a579c03cb] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1225 19:02:32.723224  281279 system_pods.go:89] "storage-provisioner" [4caa74a1-bb32-45a7-9cc3-d0af791be23e] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1225 19:02:32.723236  281279 system_pods.go:126] duration metric: took 2.529355ms to wait for k8s-apps to be running ...
	I1225 19:02:32.723244  281279 system_svc.go:44] waiting for kubelet service to be running ....
	I1225 19:02:32.723283  281279 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1225 19:02:32.736188  281279 system_svc.go:56] duration metric: took 12.935551ms WaitForService to wait for kubelet
	I1225 19:02:32.736219  281279 kubeadm.go:587] duration metric: took 3.193223437s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1225 19:02:32.736245  281279 node_conditions.go:102] verifying NodePressure condition ...
	I1225 19:02:32.738810  281279 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1225 19:02:32.738830  281279 node_conditions.go:123] node cpu capacity is 8
	I1225 19:02:32.738843  281279 node_conditions.go:105] duration metric: took 2.58894ms to run NodePressure ...
	I1225 19:02:32.738854  281279 start.go:242] waiting for startup goroutines ...
	I1225 19:02:32.738863  281279 start.go:247] waiting for cluster config update ...
	I1225 19:02:32.738876  281279 start.go:256] writing updated cluster config ...
	I1225 19:02:32.739169  281279 ssh_runner.go:195] Run: rm -f paused
	I1225 19:02:32.742992  281279 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1225 19:02:32.746740  281279 pod_ready.go:83] waiting for pod "coredns-7d764666f9-lqvms" in "kube-system" namespace to be "Ready" or be gone ...
	W1225 19:02:34.752237  281279 pod_ready.go:104] pod "coredns-7d764666f9-lqvms" is not "Ready", error: <nil>
	W1225 19:02:36.753074  281279 pod_ready.go:104] pod "coredns-7d764666f9-lqvms" is not "Ready", error: <nil>
	I1225 19:02:34.542000  283722 out.go:252] * Restarting existing docker container for "embed-certs-684693" ...
	I1225 19:02:34.542086  283722 cli_runner.go:164] Run: docker start embed-certs-684693
	I1225 19:02:34.793389  283722 cli_runner.go:164] Run: docker container inspect embed-certs-684693 --format={{.State.Status}}
	I1225 19:02:34.810888  283722 kic.go:430] container "embed-certs-684693" state is running.
	I1225 19:02:34.811290  283722 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-684693
	I1225 19:02:34.831852  283722 profile.go:143] Saving config to /home/jenkins/minikube-integration/22301-5579/.minikube/profiles/embed-certs-684693/config.json ...
	I1225 19:02:34.832148  283722 machine.go:94] provisionDockerMachine start ...
	I1225 19:02:34.832232  283722 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-684693
	I1225 19:02:34.850382  283722 main.go:144] libmachine: Using SSH client type: native
	I1225 19:02:34.850645  283722 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e300] 0x850fa0 <nil>  [] 0s} 127.0.0.1 33083 <nil> <nil>}
	I1225 19:02:34.850670  283722 main.go:144] libmachine: About to run SSH command:
	hostname
	I1225 19:02:34.851245  283722 main.go:144] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:38634->127.0.0.1:33083: read: connection reset by peer
	I1225 19:02:37.988590  283722 main.go:144] libmachine: SSH cmd err, output: <nil>: embed-certs-684693
	
	I1225 19:02:37.988620  283722 ubuntu.go:182] provisioning hostname "embed-certs-684693"
	I1225 19:02:37.988683  283722 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-684693
	I1225 19:02:38.011764  283722 main.go:144] libmachine: Using SSH client type: native
	I1225 19:02:38.012106  283722 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e300] 0x850fa0 <nil>  [] 0s} 127.0.0.1 33083 <nil> <nil>}
	I1225 19:02:38.012128  283722 main.go:144] libmachine: About to run SSH command:
	sudo hostname embed-certs-684693 && echo "embed-certs-684693" | sudo tee /etc/hostname
	I1225 19:02:38.164002  283722 main.go:144] libmachine: SSH cmd err, output: <nil>: embed-certs-684693
	
	I1225 19:02:38.164082  283722 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-684693
	I1225 19:02:38.186808  283722 main.go:144] libmachine: Using SSH client type: native
	I1225 19:02:38.187128  283722 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e300] 0x850fa0 <nil>  [] 0s} 127.0.0.1 33083 <nil> <nil>}
	I1225 19:02:38.187156  283722 main.go:144] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-684693' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-684693/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-684693' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1225 19:02:38.325407  283722 main.go:144] libmachine: SSH cmd err, output: <nil>: 
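
	The shell snippet above ensures the container's hostname resolves locally: if no /etc/hosts line already ends with the hostname, it either rewrites the existing 127.0.1.1 entry or appends a new one. The following is a minimal illustrative Go sketch of that same check-and-rewrite logic (the file path and hostname are parameters; this is not minikube's implementation, just the equivalent of the shell shown above):

	package main

	import (
		"fmt"
		"os"
		"regexp"
		"strings"
	)

	// ensureHostsEntry mirrors the shell logic above: if no line in the hosts
	// file ends with the hostname, either rewrite an existing "127.0.1.1 ..."
	// line or append a fresh "127.0.1.1 <hostname>" entry.
	func ensureHostsEntry(path, hostname string) error {
		data, err := os.ReadFile(path)
		if err != nil {
			return err
		}
		if regexp.MustCompile(`(?m)\s` + regexp.QuoteMeta(hostname) + `$`).Match(data) {
			return nil // entry already present
		}
		loopback := regexp.MustCompile(`(?m)^127\.0\.1\.1\s.*$`)
		var out string
		if loopback.Match(data) {
			out = loopback.ReplaceAllString(string(data), "127.0.1.1 "+hostname)
		} else {
			out = strings.TrimRight(string(data), "\n") + "\n127.0.1.1 " + hostname + "\n"
		}
		return os.WriteFile(path, []byte(out), 0644)
	}

	func main() {
		// Example against a scratch copy rather than the real /etc/hosts.
		if err := ensureHostsEntry("hosts.test", "embed-certs-684693"); err != nil {
			fmt.Fprintln(os.Stderr, err)
		}
	}
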
	I1225 19:02:38.325438  283722 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22301-5579/.minikube CaCertPath:/home/jenkins/minikube-integration/22301-5579/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22301-5579/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22301-5579/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22301-5579/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22301-5579/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22301-5579/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22301-5579/.minikube}
	I1225 19:02:38.325493  283722 ubuntu.go:190] setting up certificates
	I1225 19:02:38.325503  283722 provision.go:84] configureAuth start
	I1225 19:02:38.325557  283722 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-684693
	I1225 19:02:38.348271  283722 provision.go:143] copyHostCerts
	I1225 19:02:38.348352  283722 exec_runner.go:144] found /home/jenkins/minikube-integration/22301-5579/.minikube/ca.pem, removing ...
	I1225 19:02:38.348372  283722 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22301-5579/.minikube/ca.pem
	I1225 19:02:38.348453  283722 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22301-5579/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22301-5579/.minikube/ca.pem (1078 bytes)
	I1225 19:02:38.348589  283722 exec_runner.go:144] found /home/jenkins/minikube-integration/22301-5579/.minikube/cert.pem, removing ...
	I1225 19:02:38.348602  283722 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22301-5579/.minikube/cert.pem
	I1225 19:02:38.348644  283722 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22301-5579/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22301-5579/.minikube/cert.pem (1123 bytes)
	I1225 19:02:38.348742  283722 exec_runner.go:144] found /home/jenkins/minikube-integration/22301-5579/.minikube/key.pem, removing ...
	I1225 19:02:38.348753  283722 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22301-5579/.minikube/key.pem
	I1225 19:02:38.348797  283722 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22301-5579/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22301-5579/.minikube/key.pem (1679 bytes)
	I1225 19:02:38.348889  283722 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22301-5579/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22301-5579/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22301-5579/.minikube/certs/ca-key.pem org=jenkins.embed-certs-684693 san=[127.0.0.1 192.168.76.2 embed-certs-684693 localhost minikube]
	I1225 19:02:38.432213  283722 provision.go:177] copyRemoteCerts
	I1225 19:02:38.432280  283722 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1225 19:02:38.432331  283722 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-684693
	I1225 19:02:38.453474  283722 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33083 SSHKeyPath:/home/jenkins/minikube-integration/22301-5579/.minikube/machines/embed-certs-684693/id_rsa Username:docker}
	I1225 19:02:38.554595  283722 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22301-5579/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1225 19:02:38.576396  283722 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22301-5579/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1225 19:02:38.596992  283722 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22301-5579/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1225 19:02:38.617721  283722 provision.go:87] duration metric: took 292.206468ms to configureAuth
	I1225 19:02:38.617752  283722 ubuntu.go:206] setting minikube options for container-runtime
	I1225 19:02:38.617972  283722 config.go:182] Loaded profile config "embed-certs-684693": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1225 19:02:38.618089  283722 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-684693
	I1225 19:02:38.641021  283722 main.go:144] libmachine: Using SSH client type: native
	I1225 19:02:38.641292  283722 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e300] 0x850fa0 <nil>  [] 0s} 127.0.0.1 33083 <nil> <nil>}
	I1225 19:02:38.641320  283722 main.go:144] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1225 19:02:39.715305  283722 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1225 19:02:39.715333  283722 machine.go:97] duration metric: took 4.883166339s to provisionDockerMachine
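
	The container-runtime step just above drops an insecure-registry flag into /etc/sysconfig/crio.minikube and restarts CRI-O so it takes effect. As a sketch only (the service CIDR 10.96.0.0/12 is taken from the log; everything else is illustrative), the same remote command string can be composed like this:

	package main

	import "fmt"

	// crioSysconfigCmd builds the shell command seen in the log: write
	// CRIO_MINIKUBE_OPTIONS to /etc/sysconfig/crio.minikube and restart cri-o.
	func crioSysconfigCmd(insecureRegistry string) string {
		return fmt.Sprintf(`sudo mkdir -p /etc/sysconfig && printf %%s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry %s '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio`, insecureRegistry)
	}

	func main() {
		fmt.Println(crioSysconfigCmd("10.96.0.0/12"))
	}
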
	I1225 19:02:39.715350  283722 start.go:293] postStartSetup for "embed-certs-684693" (driver="docker")
	I1225 19:02:39.715364  283722 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1225 19:02:39.715441  283722 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1225 19:02:39.715501  283722 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-684693
	I1225 19:02:39.740065  283722 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33083 SSHKeyPath:/home/jenkins/minikube-integration/22301-5579/.minikube/machines/embed-certs-684693/id_rsa Username:docker}
	I1225 19:02:39.843885  283722 ssh_runner.go:195] Run: cat /etc/os-release
	I1225 19:02:39.848425  283722 main.go:144] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1225 19:02:39.848455  283722 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1225 19:02:39.848467  283722 filesync.go:126] Scanning /home/jenkins/minikube-integration/22301-5579/.minikube/addons for local assets ...
	I1225 19:02:39.848524  283722 filesync.go:126] Scanning /home/jenkins/minikube-integration/22301-5579/.minikube/files for local assets ...
	I1225 19:02:39.848635  283722 filesync.go:149] local asset: /home/jenkins/minikube-integration/22301-5579/.minikube/files/etc/ssl/certs/91122.pem -> 91122.pem in /etc/ssl/certs
	I1225 19:02:39.848783  283722 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1225 19:02:39.858728  283722 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22301-5579/.minikube/files/etc/ssl/certs/91122.pem --> /etc/ssl/certs/91122.pem (1708 bytes)
	I1225 19:02:39.881703  283722 start.go:296] duration metric: took 166.337995ms for postStartSetup
	I1225 19:02:39.881788  283722 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1225 19:02:39.881839  283722 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-684693
	I1225 19:02:39.905842  283722 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33083 SSHKeyPath:/home/jenkins/minikube-integration/22301-5579/.minikube/machines/embed-certs-684693/id_rsa Username:docker}
	I1225 19:02:40.010304  283722 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1225 19:02:40.015690  283722 fix.go:56] duration metric: took 5.493403102s for fixHost
	I1225 19:02:40.015722  283722 start.go:83] releasing machines lock for "embed-certs-684693", held for 5.493472094s
	I1225 19:02:40.015792  283722 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-684693
	I1225 19:02:40.038762  283722 ssh_runner.go:195] Run: cat /version.json
	I1225 19:02:40.038822  283722 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-684693
	I1225 19:02:40.038841  283722 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1225 19:02:40.038947  283722 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-684693
	I1225 19:02:40.062655  283722 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33083 SSHKeyPath:/home/jenkins/minikube-integration/22301-5579/.minikube/machines/embed-certs-684693/id_rsa Username:docker}
	I1225 19:02:40.065695  283722 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33083 SSHKeyPath:/home/jenkins/minikube-integration/22301-5579/.minikube/machines/embed-certs-684693/id_rsa Username:docker}
	I1225 19:02:40.157087  283722 ssh_runner.go:195] Run: systemctl --version
	I1225 19:02:40.229059  283722 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1225 19:02:40.277374  283722 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1225 19:02:40.282823  283722 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1225 19:02:40.282886  283722 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1225 19:02:40.291041  283722 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1225 19:02:40.291063  283722 start.go:496] detecting cgroup driver to use...
	I1225 19:02:40.291096  283722 detect.go:190] detected "systemd" cgroup driver on host os
	I1225 19:02:40.291153  283722 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1225 19:02:40.306232  283722 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1225 19:02:40.318913  283722 docker.go:218] disabling cri-docker service (if available) ...
	I1225 19:02:40.318974  283722 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1225 19:02:40.337150  283722 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1225 19:02:40.353726  283722 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1225 19:02:40.454327  283722 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1225 19:02:40.533845  283722 docker.go:234] disabling docker service ...
	I1225 19:02:40.533928  283722 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1225 19:02:40.548103  283722 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1225 19:02:40.560536  283722 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1225 19:02:40.640861  283722 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1225 19:02:40.723298  283722 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1225 19:02:40.735960  283722 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1225 19:02:40.750674  283722 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1225 19:02:40.750756  283722 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1225 19:02:40.759687  283722 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1225 19:02:40.759744  283722 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1225 19:02:40.768619  283722 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1225 19:02:40.776920  283722 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1225 19:02:40.785326  283722 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1225 19:02:40.793469  283722 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1225 19:02:40.802341  283722 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1225 19:02:40.810328  283722 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1225 19:02:40.819077  283722 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1225 19:02:40.826284  283722 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1225 19:02:40.833207  283722 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1225 19:02:40.910503  283722 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1225 19:02:41.525244  283722 start.go:553] Will wait 60s for socket path /var/run/crio/crio.sock
	I1225 19:02:41.525315  283722 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1225 19:02:41.529300  283722 start.go:574] Will wait 60s for crictl version
	I1225 19:02:41.529353  283722 ssh_runner.go:195] Run: which crictl
	I1225 19:02:41.533043  283722 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1225 19:02:41.557927  283722 start.go:590] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
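
	After restarting CRI-O, the log waits up to 60s for the socket and for `crictl version` to answer. A minimal sketch of that readiness poll, assuming the /usr/local/bin/crictl path shown in the log and a 2-second retry interval (an assumption, not the tested code):

	package main

	import (
		"fmt"
		"os/exec"
		"time"
	)

	// waitForCrictl polls `crictl version` until it succeeds or the deadline
	// passes, roughly mirroring the "Will wait 60s for crictl version" step.
	func waitForCrictl(crictlPath string, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for {
			out, err := exec.Command("sudo", crictlPath, "version").CombinedOutput()
			if err == nil {
				fmt.Print(string(out))
				return nil
			}
			if time.Now().After(deadline) {
				return fmt.Errorf("crictl not ready after %s: %v", timeout, err)
			}
			time.Sleep(2 * time.Second) // retry interval is an assumption
		}
	}

	func main() {
		if err := waitForCrictl("/usr/local/bin/crictl", 60*time.Second); err != nil {
			fmt.Println(err)
		}
	}
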
	I1225 19:02:41.558033  283722 ssh_runner.go:195] Run: crio --version
	I1225 19:02:41.586469  283722 ssh_runner.go:195] Run: crio --version
	I1225 19:02:41.615377  283722 out.go:179] * Preparing Kubernetes v1.34.3 on CRI-O 1.34.3 ...
	W1225 19:02:36.749283  276130 pod_ready.go:104] pod "coredns-5dd5756b68-chdzr" is not "Ready", error: <nil>
	W1225 19:02:39.249184  276130 pod_ready.go:104] pod "coredns-5dd5756b68-chdzr" is not "Ready", error: <nil>
	I1225 19:02:40.250399  276130 pod_ready.go:94] pod "coredns-5dd5756b68-chdzr" is "Ready"
	I1225 19:02:40.250430  276130 pod_ready.go:86] duration metric: took 38.007925686s for pod "coredns-5dd5756b68-chdzr" in "kube-system" namespace to be "Ready" or be gone ...
	I1225 19:02:40.254322  276130 pod_ready.go:83] waiting for pod "etcd-old-k8s-version-163446" in "kube-system" namespace to be "Ready" or be gone ...
	I1225 19:02:40.260487  276130 pod_ready.go:94] pod "etcd-old-k8s-version-163446" is "Ready"
	I1225 19:02:40.260516  276130 pod_ready.go:86] duration metric: took 6.166204ms for pod "etcd-old-k8s-version-163446" in "kube-system" namespace to be "Ready" or be gone ...
	I1225 19:02:40.264076  276130 pod_ready.go:83] waiting for pod "kube-apiserver-old-k8s-version-163446" in "kube-system" namespace to be "Ready" or be gone ...
	I1225 19:02:40.268500  276130 pod_ready.go:94] pod "kube-apiserver-old-k8s-version-163446" is "Ready"
	I1225 19:02:40.268521  276130 pod_ready.go:86] duration metric: took 4.418592ms for pod "kube-apiserver-old-k8s-version-163446" in "kube-system" namespace to be "Ready" or be gone ...
	I1225 19:02:40.271820  276130 pod_ready.go:83] waiting for pod "kube-controller-manager-old-k8s-version-163446" in "kube-system" namespace to be "Ready" or be gone ...
	I1225 19:02:40.445948  276130 pod_ready.go:94] pod "kube-controller-manager-old-k8s-version-163446" is "Ready"
	I1225 19:02:40.445979  276130 pod_ready.go:86] duration metric: took 174.135469ms for pod "kube-controller-manager-old-k8s-version-163446" in "kube-system" namespace to be "Ready" or be gone ...
	I1225 19:02:40.646252  276130 pod_ready.go:83] waiting for pod "kube-proxy-mxztf" in "kube-system" namespace to be "Ready" or be gone ...
	I1225 19:02:41.046220  276130 pod_ready.go:94] pod "kube-proxy-mxztf" is "Ready"
	I1225 19:02:41.046246  276130 pod_ready.go:86] duration metric: took 399.972902ms for pod "kube-proxy-mxztf" in "kube-system" namespace to be "Ready" or be gone ...
	I1225 19:02:41.246867  276130 pod_ready.go:83] waiting for pod "kube-scheduler-old-k8s-version-163446" in "kube-system" namespace to be "Ready" or be gone ...
	I1225 19:02:41.646838  276130 pod_ready.go:94] pod "kube-scheduler-old-k8s-version-163446" is "Ready"
	I1225 19:02:41.646872  276130 pod_ready.go:86] duration metric: took 399.980482ms for pod "kube-scheduler-old-k8s-version-163446" in "kube-system" namespace to be "Ready" or be gone ...
	I1225 19:02:41.646890  276130 pod_ready.go:40] duration metric: took 39.408421641s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1225 19:02:41.696334  276130 start.go:625] kubectl: 1.35.0, cluster: 1.28.0 (minor skew: 7)
	I1225 19:02:41.697651  276130 out.go:203] 
	W1225 19:02:41.698979  276130 out.go:285] ! /usr/local/bin/kubectl is version 1.35.0, which may have incompatibilities with Kubernetes 1.28.0.
	I1225 19:02:41.700133  276130 out.go:179]   - Want kubectl v1.28.0? Try 'minikube kubectl -- get pods -A'
	I1225 19:02:41.701308  276130 out.go:179] * Done! kubectl is now configured to use "old-k8s-version-163446" cluster and "default" namespace by default
	W1225 19:02:38.760252  281279 pod_ready.go:104] pod "coredns-7d764666f9-lqvms" is not "Ready", error: <nil>
	W1225 19:02:41.251779  281279 pod_ready.go:104] pod "coredns-7d764666f9-lqvms" is not "Ready", error: <nil>
	I1225 19:02:39.959075  260034 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": (10.580193996s)
	W1225 19:02:39.959128  260034 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	Get "https://localhost:8443/api/v1/nodes?limit=500": dial tcp [::1]:8443: connect: connection refused - error from a previous attempt: read tcp [::1]:50714->[::1]:8443: read: connection reset by peer
	 output: 
	** stderr ** 
	Get "https://localhost:8443/api/v1/nodes?limit=500": dial tcp [::1]:8443: connect: connection refused - error from a previous attempt: read tcp [::1]:50714->[::1]:8443: read: connection reset by peer
	
	** /stderr **
	I1225 19:02:39.959139  260034 logs.go:123] Gathering logs for etcd [b9e20d208a63ece66f4b7d503d5220ba92810f69d6963dee30a8cf70eeec5508] ...
	I1225 19:02:39.959177  260034 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b9e20d208a63ece66f4b7d503d5220ba92810f69d6963dee30a8cf70eeec5508"
	I1225 19:02:40.005485  260034 logs.go:123] Gathering logs for kube-controller-manager [192a48996072199570ad945ab3e2532fd7d3abac911bd3ba86671ed5f662855d] ...
	I1225 19:02:40.005519  260034 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 192a48996072199570ad945ab3e2532fd7d3abac911bd3ba86671ed5f662855d"
	I1225 19:02:40.043514  260034 logs.go:123] Gathering logs for kube-controller-manager [33343dda71f9e4a85025812aa89a625b9ed4aacd8cf40e537a040b7a352c6338] ...
	I1225 19:02:40.043550  260034 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 33343dda71f9e4a85025812aa89a625b9ed4aacd8cf40e537a040b7a352c6338"
	I1225 19:02:40.079879  260034 logs.go:123] Gathering logs for container status ...
	I1225 19:02:40.079928  260034 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1225 19:02:40.115370  260034 logs.go:123] Gathering logs for kubelet ...
	I1225 19:02:40.115405  260034 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1225 19:02:40.201145  260034 logs.go:123] Gathering logs for kube-apiserver [6286b997d7e536f57739dca40618206bd2111cd0ea9142bc6f4203ad7d126123] ...
	I1225 19:02:40.201185  260034 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 6286b997d7e536f57739dca40618206bd2111cd0ea9142bc6f4203ad7d126123"
	I1225 19:02:40.235170  260034 logs.go:123] Gathering logs for kube-apiserver [44bf55c88d993f166c60ad4a0fedbaf734b561325d62b7a89b7297226b36cf23] ...
	I1225 19:02:40.235198  260034 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 44bf55c88d993f166c60ad4a0fedbaf734b561325d62b7a89b7297226b36cf23"
	W1225 19:02:40.266051  260034 logs.go:130] failed kube-apiserver [44bf55c88d993f166c60ad4a0fedbaf734b561325d62b7a89b7297226b36cf23]: command: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 44bf55c88d993f166c60ad4a0fedbaf734b561325d62b7a89b7297226b36cf23" /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 44bf55c88d993f166c60ad4a0fedbaf734b561325d62b7a89b7297226b36cf23": Process exited with status 1
	stdout:
	
	stderr:
	E1225 19:02:40.263563    1990 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"44bf55c88d993f166c60ad4a0fedbaf734b561325d62b7a89b7297226b36cf23\": container with ID starting with 44bf55c88d993f166c60ad4a0fedbaf734b561325d62b7a89b7297226b36cf23 not found: ID does not exist" containerID="44bf55c88d993f166c60ad4a0fedbaf734b561325d62b7a89b7297226b36cf23"
	time="2025-12-25T19:02:40Z" level=fatal msg="rpc error: code = NotFound desc = could not find container \"44bf55c88d993f166c60ad4a0fedbaf734b561325d62b7a89b7297226b36cf23\": container with ID starting with 44bf55c88d993f166c60ad4a0fedbaf734b561325d62b7a89b7297226b36cf23 not found: ID does not exist"
	 output: 
	** stderr ** 
	E1225 19:02:40.263563    1990 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"44bf55c88d993f166c60ad4a0fedbaf734b561325d62b7a89b7297226b36cf23\": container with ID starting with 44bf55c88d993f166c60ad4a0fedbaf734b561325d62b7a89b7297226b36cf23 not found: ID does not exist" containerID="44bf55c88d993f166c60ad4a0fedbaf734b561325d62b7a89b7297226b36cf23"
	time="2025-12-25T19:02:40Z" level=fatal msg="rpc error: code = NotFound desc = could not find container \"44bf55c88d993f166c60ad4a0fedbaf734b561325d62b7a89b7297226b36cf23\": container with ID starting with 44bf55c88d993f166c60ad4a0fedbaf734b561325d62b7a89b7297226b36cf23 not found: ID does not exist"
	
	** /stderr **
	I1225 19:02:40.266070  260034 logs.go:123] Gathering logs for kube-apiserver [e3d5802d1e3d9285d4b0925e9f5391b90305352a65abbab3d6461e977e07719c] ...
	I1225 19:02:40.266081  260034 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e3d5802d1e3d9285d4b0925e9f5391b90305352a65abbab3d6461e977e07719c"
	I1225 19:02:40.314117  260034 logs.go:123] Gathering logs for kube-scheduler [5908cdc560e3221d929c96c779b2e73ece21f96acb63b3adbf448cc7de791f2b] ...
	I1225 19:02:40.314147  260034 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5908cdc560e3221d929c96c779b2e73ece21f96acb63b3adbf448cc7de791f2b"
	I1225 19:02:41.616763  283722 cli_runner.go:164] Run: docker network inspect embed-certs-684693 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1225 19:02:41.634843  283722 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1225 19:02:41.638869  283722 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1225 19:02:41.650549  283722 kubeadm.go:884] updating cluster {Name:embed-certs-684693 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:embed-certs-684693 Namespace:default APIServerHAVIP: APIServerName:minikubeCA AP
IServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false} ...
	I1225 19:02:41.650690  283722 preload.go:188] Checking if preload exists for k8s version v1.34.3 and runtime crio
	I1225 19:02:41.650753  283722 ssh_runner.go:195] Run: sudo crictl images --output json
	I1225 19:02:41.688485  283722 crio.go:561] all images are preloaded for cri-o runtime.
	I1225 19:02:41.688505  283722 crio.go:433] Images already preloaded, skipping extraction
	I1225 19:02:41.688547  283722 ssh_runner.go:195] Run: sudo crictl images --output json
	I1225 19:02:41.714798  283722 crio.go:561] all images are preloaded for cri-o runtime.
	I1225 19:02:41.714820  283722 cache_images.go:86] Images are preloaded, skipping loading
	I1225 19:02:41.714835  283722 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.34.3 crio true true} ...
	I1225 19:02:41.714964  283722 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=embed-certs-684693 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.3 ClusterName:embed-certs-684693 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1225 19:02:41.715039  283722 ssh_runner.go:195] Run: crio config
	I1225 19:02:41.768162  283722 cni.go:84] Creating CNI manager for ""
	I1225 19:02:41.768186  283722 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1225 19:02:41.768201  283722 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1225 19:02:41.768275  283722 kubeadm.go:197] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.34.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-684693 NodeName:embed-certs-684693 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1225 19:02:41.768465  283722 kubeadm.go:203] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-684693"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1225 19:02:41.768549  283722 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.3
	I1225 19:02:41.777579  283722 binaries.go:51] Found k8s binaries, skipping transfer
	I1225 19:02:41.777639  283722 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1225 19:02:41.786430  283722 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (368 bytes)
	I1225 19:02:41.799614  283722 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1225 19:02:41.813093  283722 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2214 bytes)
	I1225 19:02:41.827054  283722 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1225 19:02:41.830776  283722 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1225 19:02:41.841065  283722 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1225 19:02:41.923904  283722 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1225 19:02:41.951478  283722 certs.go:69] Setting up /home/jenkins/minikube-integration/22301-5579/.minikube/profiles/embed-certs-684693 for IP: 192.168.76.2
	I1225 19:02:41.951499  283722 certs.go:195] generating shared ca certs ...
	I1225 19:02:41.951517  283722 certs.go:227] acquiring lock for ca certs: {Name:mkc96ab6366f062029d385d20297063671b19bca Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1225 19:02:41.951691  283722 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22301-5579/.minikube/ca.key
	I1225 19:02:41.951758  283722 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22301-5579/.minikube/proxy-client-ca.key
	I1225 19:02:41.951770  283722 certs.go:257] generating profile certs ...
	I1225 19:02:41.951883  283722 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22301-5579/.minikube/profiles/embed-certs-684693/client.key
	I1225 19:02:41.951982  283722 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22301-5579/.minikube/profiles/embed-certs-684693/apiserver.key.7d2dd373
	I1225 19:02:41.952032  283722 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22301-5579/.minikube/profiles/embed-certs-684693/proxy-client.key
	I1225 19:02:41.952168  283722 certs.go:484] found cert: /home/jenkins/minikube-integration/22301-5579/.minikube/certs/9112.pem (1338 bytes)
	W1225 19:02:41.952213  283722 certs.go:480] ignoring /home/jenkins/minikube-integration/22301-5579/.minikube/certs/9112_empty.pem, impossibly tiny 0 bytes
	I1225 19:02:41.952225  283722 certs.go:484] found cert: /home/jenkins/minikube-integration/22301-5579/.minikube/certs/ca-key.pem (1679 bytes)
	I1225 19:02:41.952259  283722 certs.go:484] found cert: /home/jenkins/minikube-integration/22301-5579/.minikube/certs/ca.pem (1078 bytes)
	I1225 19:02:41.952296  283722 certs.go:484] found cert: /home/jenkins/minikube-integration/22301-5579/.minikube/certs/cert.pem (1123 bytes)
	I1225 19:02:41.952329  283722 certs.go:484] found cert: /home/jenkins/minikube-integration/22301-5579/.minikube/certs/key.pem (1679 bytes)
	I1225 19:02:41.952390  283722 certs.go:484] found cert: /home/jenkins/minikube-integration/22301-5579/.minikube/files/etc/ssl/certs/91122.pem (1708 bytes)
	I1225 19:02:41.954169  283722 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22301-5579/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1225 19:02:41.976087  283722 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22301-5579/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1225 19:02:42.004209  283722 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22301-5579/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1225 19:02:42.026214  283722 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22301-5579/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1225 19:02:42.051456  283722 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22301-5579/.minikube/profiles/embed-certs-684693/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I1225 19:02:42.070827  283722 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22301-5579/.minikube/profiles/embed-certs-684693/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1225 19:02:42.090229  283722 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22301-5579/.minikube/profiles/embed-certs-684693/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1225 19:02:42.107431  283722 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22301-5579/.minikube/profiles/embed-certs-684693/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1225 19:02:42.124629  283722 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22301-5579/.minikube/files/etc/ssl/certs/91122.pem --> /usr/share/ca-certificates/91122.pem (1708 bytes)
	I1225 19:02:42.142144  283722 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22301-5579/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1225 19:02:42.159938  283722 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22301-5579/.minikube/certs/9112.pem --> /usr/share/ca-certificates/9112.pem (1338 bytes)
	I1225 19:02:42.177820  283722 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (722 bytes)
	I1225 19:02:42.190192  283722 ssh_runner.go:195] Run: openssl version
	I1225 19:02:42.196349  283722 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/91122.pem
	I1225 19:02:42.204337  283722 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/91122.pem /etc/ssl/certs/91122.pem
	I1225 19:02:42.211812  283722 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/91122.pem
	I1225 19:02:42.215821  283722 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 25 18:34 /usr/share/ca-certificates/91122.pem
	I1225 19:02:42.215879  283722 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/91122.pem
	I1225 19:02:42.252652  283722 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1225 19:02:42.260755  283722 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1225 19:02:42.268248  283722 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1225 19:02:42.275834  283722 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1225 19:02:42.279512  283722 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 25 18:28 /usr/share/ca-certificates/minikubeCA.pem
	I1225 19:02:42.279566  283722 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1225 19:02:42.314793  283722 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1225 19:02:42.322677  283722 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/9112.pem
	I1225 19:02:42.330378  283722 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/9112.pem /etc/ssl/certs/9112.pem
	I1225 19:02:42.338291  283722 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/9112.pem
	I1225 19:02:42.342039  283722 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 25 18:34 /usr/share/ca-certificates/9112.pem
	I1225 19:02:42.342086  283722 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/9112.pem
	I1225 19:02:42.376088  283722 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
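
	The certificate steps just above install each PEM under /usr/share/ca-certificates, symlink it into /etc/ssl/certs, run `openssl x509 -hash -noout` on it, and then verify that a `<hash>.0` link exists, which is how OpenSSL finds trusted CAs by subject hash. A minimal sketch of maintaining such a hash link, assuming the minikubeCA.pem path from the log (the sketch creates the link itself, which is an assumption about how the `<hash>.0` entry is kept up to date):

	package main

	import (
		"fmt"
		"os"
		"os/exec"
		"path/filepath"
		"strings"
	)

	// linkCertByHash asks openssl for the certificate's subject hash and points
	// <certsDir>/<hash>.0 at the certificate, replacing any existing link.
	func linkCertByHash(certPath, certsDir string) (string, error) {
		out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
		if err != nil {
			return "", err
		}
		hash := strings.TrimSpace(string(out))
		link := filepath.Join(certsDir, hash+".0")
		_ = os.Remove(link) // equivalent of ln -fs: replace any existing link
		return link, os.Symlink(certPath, link)
	}

	func main() {
		link, err := linkCertByHash("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs")
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			return
		}
		fmt.Println("linked", link)
	}
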
	I1225 19:02:42.383483  283722 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1225 19:02:42.387192  283722 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1225 19:02:42.421140  283722 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1225 19:02:42.455674  283722 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1225 19:02:42.504470  283722 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1225 19:02:42.549706  283722 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1225 19:02:42.598462  283722 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1225 19:02:42.645140  283722 kubeadm.go:401] StartCluster: {Name:embed-certs-684693 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:embed-certs-684693 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APISe
rverNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1225 19:02:42.645223  283722 cri.go:61] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1225 19:02:42.645296  283722 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1225 19:02:42.677285  283722 cri.go:96] found id: "8d7e8dc3eb792d198de0248572b5e18d4499c1684bda9bf5f17def41a2fab818"
	I1225 19:02:42.677315  283722 cri.go:96] found id: "8d2b7baedf500ee7f1bfe8f8dd198f5e17d7d4765eb8784fa1263ff20a37911d"
	I1225 19:02:42.677322  283722 cri.go:96] found id: "f163abb6ccc23812b01aab1787a1e9cb17c7aa29ac0031c5d3d528bd0d223238"
	I1225 19:02:42.677327  283722 cri.go:96] found id: "96d9542c197212f0c05bc896dbb04b02a41cb77ea63e21dd98bd9fec4091843d"
	I1225 19:02:42.677331  283722 cri.go:96] found id: ""
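
	The "found id" entries above come from a single crictl query: a quiet (IDs only) listing of all containers whose pod-namespace label is kube-system. A small illustrative Go sketch of issuing that same command and splitting the output into IDs (a sketch using the exact crictl flags from the log, not the tested code path):

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// listKubeSystemContainers runs the label-filtered crictl listing seen in
	// the log and returns one container ID per non-empty output line.
	func listKubeSystemContainers(namespace string) ([]string, error) {
		out, err := exec.Command("sudo", "crictl", "--timeout=10s", "ps", "-a",
			"--quiet", "--label", "io.kubernetes.pod.namespace="+namespace).Output()
		if err != nil {
			return nil, err
		}
		var ids []string
		for _, line := range strings.Split(strings.TrimSpace(string(out)), "\n") {
			if line != "" {
				ids = append(ids, line)
			}
		}
		return ids, nil
	}

	func main() {
		ids, err := listKubeSystemContainers("kube-system")
		if err != nil {
			fmt.Println(err)
			return
		}
		for _, id := range ids {
			fmt.Println("found id:", id)
		}
	}
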
	I1225 19:02:42.677390  283722 ssh_runner.go:195] Run: sudo runc list -f json
	W1225 19:02:42.691054  283722 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-25T19:02:42Z" level=error msg="open /run/runc: no such file or directory"
	I1225 19:02:42.691135  283722 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1225 19:02:42.699848  283722 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1225 19:02:42.699868  283722 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1225 19:02:42.699928  283722 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1225 19:02:42.707825  283722 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1225 19:02:42.708605  283722 kubeconfig.go:47] verify endpoint returned: get endpoint: "embed-certs-684693" does not appear in /home/jenkins/minikube-integration/22301-5579/kubeconfig
	I1225 19:02:42.709197  283722 kubeconfig.go:62] /home/jenkins/minikube-integration/22301-5579/kubeconfig needs updating (will repair): [kubeconfig missing "embed-certs-684693" cluster setting kubeconfig missing "embed-certs-684693" context setting]
	I1225 19:02:42.709842  283722 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22301-5579/kubeconfig: {Name:mk959de02482281f87c2171d9b2421941fad1e06 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1225 19:02:42.711591  283722 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1225 19:02:42.719379  283722 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.76.2
	I1225 19:02:42.719403  283722 kubeadm.go:602] duration metric: took 19.530409ms to restartPrimaryControlPlane
	I1225 19:02:42.719411  283722 kubeadm.go:403] duration metric: took 74.282356ms to StartCluster
	I1225 19:02:42.719426  283722 settings.go:142] acquiring lock: {Name:mk8db67a95daebdad9164c803819dcb179c3006a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1225 19:02:42.719492  283722 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22301-5579/kubeconfig
	I1225 19:02:42.721450  283722 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22301-5579/kubeconfig: {Name:mk959de02482281f87c2171d9b2421941fad1e06 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1225 19:02:42.721725  283722 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1225 19:02:42.721786  283722 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1225 19:02:42.721909  283722 addons.go:70] Setting storage-provisioner=true in profile "embed-certs-684693"
	I1225 19:02:42.721939  283722 addons.go:239] Setting addon storage-provisioner=true in "embed-certs-684693"
	W1225 19:02:42.721950  283722 addons.go:248] addon storage-provisioner should already be in state true
	I1225 19:02:42.721980  283722 host.go:66] Checking if "embed-certs-684693" exists ...
	I1225 19:02:42.722007  283722 config.go:182] Loaded profile config "embed-certs-684693": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1225 19:02:42.722066  283722 addons.go:70] Setting dashboard=true in profile "embed-certs-684693"
	I1225 19:02:42.722083  283722 addons.go:239] Setting addon dashboard=true in "embed-certs-684693"
	W1225 19:02:42.722090  283722 addons.go:248] addon dashboard should already be in state true
	I1225 19:02:42.722116  283722 host.go:66] Checking if "embed-certs-684693" exists ...
	I1225 19:02:42.722510  283722 cli_runner.go:164] Run: docker container inspect embed-certs-684693 --format={{.State.Status}}
	I1225 19:02:42.722579  283722 cli_runner.go:164] Run: docker container inspect embed-certs-684693 --format={{.State.Status}}
	I1225 19:02:42.722672  283722 addons.go:70] Setting default-storageclass=true in profile "embed-certs-684693"
	I1225 19:02:42.722694  283722 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-684693"
	I1225 19:02:42.722988  283722 cli_runner.go:164] Run: docker container inspect embed-certs-684693 --format={{.State.Status}}
	I1225 19:02:42.723522  283722 out.go:179] * Verifying Kubernetes components...
	I1225 19:02:42.724623  283722 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1225 19:02:42.748101  283722 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1225 19:02:42.748216  283722 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1225 19:02:42.749489  283722 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1225 19:02:42.749509  283722 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1225 19:02:42.749606  283722 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-684693
	I1225 19:02:42.750777  283722 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1225 19:02:42.750946  283722 addons.go:239] Setting addon default-storageclass=true in "embed-certs-684693"
	W1225 19:02:42.750969  283722 addons.go:248] addon default-storageclass should already be in state true
	I1225 19:02:42.750996  283722 host.go:66] Checking if "embed-certs-684693" exists ...
	I1225 19:02:42.751539  283722 cli_runner.go:164] Run: docker container inspect embed-certs-684693 --format={{.State.Status}}
	I1225 19:02:42.752863  283722 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1225 19:02:42.752881  283722 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1225 19:02:42.752966  283722 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-684693
	I1225 19:02:42.787641  283722 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33083 SSHKeyPath:/home/jenkins/minikube-integration/22301-5579/.minikube/machines/embed-certs-684693/id_rsa Username:docker}
	I1225 19:02:42.787671  283722 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1225 19:02:42.787787  283722 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1225 19:02:42.787859  283722 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-684693
	I1225 19:02:42.790135  283722 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33083 SSHKeyPath:/home/jenkins/minikube-integration/22301-5579/.minikube/machines/embed-certs-684693/id_rsa Username:docker}
	I1225 19:02:42.812493  283722 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33083 SSHKeyPath:/home/jenkins/minikube-integration/22301-5579/.minikube/machines/embed-certs-684693/id_rsa Username:docker}
	I1225 19:02:42.890653  283722 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1225 19:02:42.900287  283722 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1225 19:02:42.903183  283722 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1225 19:02:42.903204  283722 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1225 19:02:42.911978  283722 node_ready.go:35] waiting up to 6m0s for node "embed-certs-684693" to be "Ready" ...
	I1225 19:02:42.920472  283722 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1225 19:02:42.920498  283722 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1225 19:02:42.923373  283722 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1225 19:02:42.943729  283722 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1225 19:02:42.943755  283722 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1225 19:02:42.963551  283722 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1225 19:02:42.963576  283722 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1225 19:02:42.982558  283722 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1225 19:02:42.982575  283722 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1225 19:02:42.999301  283722 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1225 19:02:42.999373  283722 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1225 19:02:43.016589  283722 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1225 19:02:43.016615  283722 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1225 19:02:43.033331  283722 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1225 19:02:43.033357  283722 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1225 19:02:43.049617  283722 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1225 19:02:43.049640  283722 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1225 19:02:43.063573  283722 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1225 19:02:44.592329  283722 node_ready.go:49] node "embed-certs-684693" is "Ready"
	I1225 19:02:44.592368  283722 node_ready.go:38] duration metric: took 1.680338472s for node "embed-certs-684693" to be "Ready" ...
	I1225 19:02:44.592387  283722 api_server.go:52] waiting for apiserver process to appear ...
	I1225 19:02:44.592444  283722 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1225 19:02:45.119446  283722 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.219122023s)
	I1225 19:02:45.119487  283722 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.196089746s)
	I1225 19:02:45.119669  283722 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (2.056061114s)
	I1225 19:02:45.119726  283722 api_server.go:72] duration metric: took 2.397967807s to wait for apiserver process to appear ...
	I1225 19:02:45.119772  283722 api_server.go:88] waiting for apiserver healthz status ...
	I1225 19:02:45.119794  283722 api_server.go:299] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1225 19:02:45.121085  283722 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p embed-certs-684693 addons enable metrics-server
	
	I1225 19:02:45.126537  283722 api_server.go:325] https://192.168.76.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1225 19:02:45.126577  283722 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1225 19:02:45.133051  283722 out.go:179] * Enabled addons: storage-provisioner, dashboard, default-storageclass
	W1225 19:02:43.252798  281279 pod_ready.go:104] pod "coredns-7d764666f9-lqvms" is not "Ready", error: <nil>
	W1225 19:02:45.752853  281279 pod_ready.go:104] pod "coredns-7d764666f9-lqvms" is not "Ready", error: <nil>
	I1225 19:02:42.845373  260034 api_server.go:299] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1225 19:02:42.845820  260034 api_server.go:315] stopped: https://192.168.94.2:8443/healthz: Get "https://192.168.94.2:8443/healthz": dial tcp 192.168.94.2:8443: connect: connection refused
	I1225 19:02:42.845875  260034 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1225 19:02:42.845996  260034 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1225 19:02:42.879236  260034 cri.go:96] found id: "6286b997d7e536f57739dca40618206bd2111cd0ea9142bc6f4203ad7d126123"
	I1225 19:02:42.879257  260034 cri.go:96] found id: "e3d5802d1e3d9285d4b0925e9f5391b90305352a65abbab3d6461e977e07719c"
	I1225 19:02:42.879264  260034 cri.go:96] found id: ""
	I1225 19:02:42.879271  260034 logs.go:282] 2 containers: [6286b997d7e536f57739dca40618206bd2111cd0ea9142bc6f4203ad7d126123 e3d5802d1e3d9285d4b0925e9f5391b90305352a65abbab3d6461e977e07719c]
	I1225 19:02:42.879320  260034 ssh_runner.go:195] Run: which crictl
	I1225 19:02:42.884110  260034 ssh_runner.go:195] Run: which crictl
	I1225 19:02:42.888950  260034 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1225 19:02:42.889027  260034 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1225 19:02:42.932017  260034 cri.go:96] found id: "b9e20d208a63ece66f4b7d503d5220ba92810f69d6963dee30a8cf70eeec5508"
	I1225 19:02:42.932040  260034 cri.go:96] found id: ""
	I1225 19:02:42.932057  260034 logs.go:282] 1 containers: [b9e20d208a63ece66f4b7d503d5220ba92810f69d6963dee30a8cf70eeec5508]
	I1225 19:02:42.932110  260034 ssh_runner.go:195] Run: which crictl
	I1225 19:02:42.937106  260034 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1225 19:02:42.937170  260034 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1225 19:02:42.973822  260034 cri.go:96] found id: ""
	I1225 19:02:42.973849  260034 logs.go:282] 0 containers: []
	W1225 19:02:42.973859  260034 logs.go:284] No container was found matching "coredns"
	I1225 19:02:42.973866  260034 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1225 19:02:42.973935  260034 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1225 19:02:43.007466  260034 cri.go:96] found id: "5908cdc560e3221d929c96c779b2e73ece21f96acb63b3adbf448cc7de791f2b"
	I1225 19:02:43.007489  260034 cri.go:96] found id: "47a1daa4bde269c4add70fd957319c2bd011e78566a1187d9a76b78fb4005782"
	I1225 19:02:43.007601  260034 cri.go:96] found id: ""
	I1225 19:02:43.007630  260034 logs.go:282] 2 containers: [5908cdc560e3221d929c96c779b2e73ece21f96acb63b3adbf448cc7de791f2b 47a1daa4bde269c4add70fd957319c2bd011e78566a1187d9a76b78fb4005782]
	I1225 19:02:43.007693  260034 ssh_runner.go:195] Run: which crictl
	I1225 19:02:43.012837  260034 ssh_runner.go:195] Run: which crictl
	I1225 19:02:43.018073  260034 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1225 19:02:43.018146  260034 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1225 19:02:43.051689  260034 cri.go:96] found id: ""
	I1225 19:02:43.051713  260034 logs.go:282] 0 containers: []
	W1225 19:02:43.051723  260034 logs.go:284] No container was found matching "kube-proxy"
	I1225 19:02:43.051738  260034 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1225 19:02:43.051843  260034 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1225 19:02:43.085693  260034 cri.go:96] found id: "192a48996072199570ad945ab3e2532fd7d3abac911bd3ba86671ed5f662855d"
	I1225 19:02:43.085732  260034 cri.go:96] found id: "33343dda71f9e4a85025812aa89a625b9ed4aacd8cf40e537a040b7a352c6338"
	I1225 19:02:43.085738  260034 cri.go:96] found id: ""
	I1225 19:02:43.085747  260034 logs.go:282] 2 containers: [192a48996072199570ad945ab3e2532fd7d3abac911bd3ba86671ed5f662855d 33343dda71f9e4a85025812aa89a625b9ed4aacd8cf40e537a040b7a352c6338]
	I1225 19:02:43.085920  260034 ssh_runner.go:195] Run: which crictl
	I1225 19:02:43.090459  260034 ssh_runner.go:195] Run: which crictl
	I1225 19:02:43.094950  260034 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1225 19:02:43.095018  260034 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1225 19:02:43.129401  260034 cri.go:96] found id: ""
	I1225 19:02:43.129427  260034 logs.go:282] 0 containers: []
	W1225 19:02:43.129435  260034 logs.go:284] No container was found matching "kindnet"
	I1225 19:02:43.129493  260034 cri.go:61] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1225 19:02:43.129555  260034 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=storage-provisioner
	I1225 19:02:43.160140  260034 cri.go:96] found id: ""
	I1225 19:02:43.160168  260034 logs.go:282] 0 containers: []
	W1225 19:02:43.160181  260034 logs.go:284] No container was found matching "storage-provisioner"
	I1225 19:02:43.160208  260034 logs.go:123] Gathering logs for describe nodes ...
	I1225 19:02:43.160227  260034 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1225 19:02:43.217277  260034 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1225 19:02:43.217294  260034 logs.go:123] Gathering logs for kube-apiserver [e3d5802d1e3d9285d4b0925e9f5391b90305352a65abbab3d6461e977e07719c] ...
	I1225 19:02:43.217305  260034 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e3d5802d1e3d9285d4b0925e9f5391b90305352a65abbab3d6461e977e07719c"
	I1225 19:02:43.253875  260034 logs.go:123] Gathering logs for etcd [b9e20d208a63ece66f4b7d503d5220ba92810f69d6963dee30a8cf70eeec5508] ...
	I1225 19:02:43.253919  260034 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b9e20d208a63ece66f4b7d503d5220ba92810f69d6963dee30a8cf70eeec5508"
	I1225 19:02:43.289320  260034 logs.go:123] Gathering logs for CRI-O ...
	I1225 19:02:43.289347  260034 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1225 19:02:43.360025  260034 logs.go:123] Gathering logs for container status ...
	I1225 19:02:43.360064  260034 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1225 19:02:43.391698  260034 logs.go:123] Gathering logs for kube-apiserver [6286b997d7e536f57739dca40618206bd2111cd0ea9142bc6f4203ad7d126123] ...
	I1225 19:02:43.391740  260034 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 6286b997d7e536f57739dca40618206bd2111cd0ea9142bc6f4203ad7d126123"
	I1225 19:02:43.425941  260034 logs.go:123] Gathering logs for kube-scheduler [5908cdc560e3221d929c96c779b2e73ece21f96acb63b3adbf448cc7de791f2b] ...
	I1225 19:02:43.425974  260034 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5908cdc560e3221d929c96c779b2e73ece21f96acb63b3adbf448cc7de791f2b"
	I1225 19:02:43.451809  260034 logs.go:123] Gathering logs for kube-scheduler [47a1daa4bde269c4add70fd957319c2bd011e78566a1187d9a76b78fb4005782] ...
	I1225 19:02:43.451836  260034 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 47a1daa4bde269c4add70fd957319c2bd011e78566a1187d9a76b78fb4005782"
	I1225 19:02:43.480244  260034 logs.go:123] Gathering logs for kube-controller-manager [192a48996072199570ad945ab3e2532fd7d3abac911bd3ba86671ed5f662855d] ...
	I1225 19:02:43.480269  260034 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 192a48996072199570ad945ab3e2532fd7d3abac911bd3ba86671ed5f662855d"
	I1225 19:02:43.508518  260034 logs.go:123] Gathering logs for kube-controller-manager [33343dda71f9e4a85025812aa89a625b9ed4aacd8cf40e537a040b7a352c6338] ...
	I1225 19:02:43.508546  260034 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 33343dda71f9e4a85025812aa89a625b9ed4aacd8cf40e537a040b7a352c6338"
	I1225 19:02:43.537208  260034 logs.go:123] Gathering logs for kubelet ...
	I1225 19:02:43.537242  260034 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1225 19:02:43.623054  260034 logs.go:123] Gathering logs for dmesg ...
	I1225 19:02:43.623094  260034 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1225 19:02:46.140967  260034 api_server.go:299] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1225 19:02:46.141334  260034 api_server.go:315] stopped: https://192.168.94.2:8443/healthz: Get "https://192.168.94.2:8443/healthz": dial tcp 192.168.94.2:8443: connect: connection refused
	I1225 19:02:46.141390  260034 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1225 19:02:46.141462  260034 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1225 19:02:46.172790  260034 cri.go:96] found id: "6286b997d7e536f57739dca40618206bd2111cd0ea9142bc6f4203ad7d126123"
	I1225 19:02:46.172813  260034 cri.go:96] found id: "e3d5802d1e3d9285d4b0925e9f5391b90305352a65abbab3d6461e977e07719c"
	I1225 19:02:46.172819  260034 cri.go:96] found id: ""
	I1225 19:02:46.172828  260034 logs.go:282] 2 containers: [6286b997d7e536f57739dca40618206bd2111cd0ea9142bc6f4203ad7d126123 e3d5802d1e3d9285d4b0925e9f5391b90305352a65abbab3d6461e977e07719c]
	I1225 19:02:46.172889  260034 ssh_runner.go:195] Run: which crictl
	I1225 19:02:46.177222  260034 ssh_runner.go:195] Run: which crictl
	I1225 19:02:46.181021  260034 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1225 19:02:46.181083  260034 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1225 19:02:46.209343  260034 cri.go:96] found id: "b9e20d208a63ece66f4b7d503d5220ba92810f69d6963dee30a8cf70eeec5508"
	I1225 19:02:46.209371  260034 cri.go:96] found id: ""
	I1225 19:02:46.209380  260034 logs.go:282] 1 containers: [b9e20d208a63ece66f4b7d503d5220ba92810f69d6963dee30a8cf70eeec5508]
	I1225 19:02:46.209456  260034 ssh_runner.go:195] Run: which crictl
	I1225 19:02:46.213577  260034 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1225 19:02:46.213647  260034 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1225 19:02:46.242055  260034 cri.go:96] found id: ""
	I1225 19:02:46.242081  260034 logs.go:282] 0 containers: []
	W1225 19:02:46.242092  260034 logs.go:284] No container was found matching "coredns"
	I1225 19:02:46.242100  260034 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1225 19:02:46.242163  260034 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1225 19:02:46.271150  260034 cri.go:96] found id: "5908cdc560e3221d929c96c779b2e73ece21f96acb63b3adbf448cc7de791f2b"
	I1225 19:02:46.271182  260034 cri.go:96] found id: "47a1daa4bde269c4add70fd957319c2bd011e78566a1187d9a76b78fb4005782"
	I1225 19:02:46.271189  260034 cri.go:96] found id: ""
	I1225 19:02:46.271200  260034 logs.go:282] 2 containers: [5908cdc560e3221d929c96c779b2e73ece21f96acb63b3adbf448cc7de791f2b 47a1daa4bde269c4add70fd957319c2bd011e78566a1187d9a76b78fb4005782]
	I1225 19:02:46.271265  260034 ssh_runner.go:195] Run: which crictl
	I1225 19:02:46.275579  260034 ssh_runner.go:195] Run: which crictl
	I1225 19:02:46.279164  260034 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1225 19:02:46.279230  260034 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1225 19:02:46.317615  260034 cri.go:96] found id: ""
	I1225 19:02:46.317639  260034 logs.go:282] 0 containers: []
	W1225 19:02:46.317647  260034 logs.go:284] No container was found matching "kube-proxy"
	I1225 19:02:46.317655  260034 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1225 19:02:46.317726  260034 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1225 19:02:46.345511  260034 cri.go:96] found id: "192a48996072199570ad945ab3e2532fd7d3abac911bd3ba86671ed5f662855d"
	I1225 19:02:46.345532  260034 cri.go:96] found id: "33343dda71f9e4a85025812aa89a625b9ed4aacd8cf40e537a040b7a352c6338"
	I1225 19:02:46.345536  260034 cri.go:96] found id: ""
	I1225 19:02:46.345542  260034 logs.go:282] 2 containers: [192a48996072199570ad945ab3e2532fd7d3abac911bd3ba86671ed5f662855d 33343dda71f9e4a85025812aa89a625b9ed4aacd8cf40e537a040b7a352c6338]
	I1225 19:02:46.345596  260034 ssh_runner.go:195] Run: which crictl
	I1225 19:02:46.349615  260034 ssh_runner.go:195] Run: which crictl
	I1225 19:02:46.353289  260034 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1225 19:02:46.353345  260034 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1225 19:02:46.379347  260034 cri.go:96] found id: ""
	I1225 19:02:46.379378  260034 logs.go:282] 0 containers: []
	W1225 19:02:46.379390  260034 logs.go:284] No container was found matching "kindnet"
	I1225 19:02:46.379398  260034 cri.go:61] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1225 19:02:46.379456  260034 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=storage-provisioner
	I1225 19:02:46.406076  260034 cri.go:96] found id: ""
	I1225 19:02:46.406103  260034 logs.go:282] 0 containers: []
	W1225 19:02:46.406111  260034 logs.go:284] No container was found matching "storage-provisioner"
	I1225 19:02:46.406120  260034 logs.go:123] Gathering logs for dmesg ...
	I1225 19:02:46.406130  260034 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1225 19:02:46.419479  260034 logs.go:123] Gathering logs for describe nodes ...
	I1225 19:02:46.419518  260034 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1225 19:02:46.475425  260034 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1225 19:02:46.475443  260034 logs.go:123] Gathering logs for kube-scheduler [5908cdc560e3221d929c96c779b2e73ece21f96acb63b3adbf448cc7de791f2b] ...
	I1225 19:02:46.475453  260034 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5908cdc560e3221d929c96c779b2e73ece21f96acb63b3adbf448cc7de791f2b"
	I1225 19:02:46.501954  260034 logs.go:123] Gathering logs for kube-controller-manager [192a48996072199570ad945ab3e2532fd7d3abac911bd3ba86671ed5f662855d] ...
	I1225 19:02:46.501986  260034 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 192a48996072199570ad945ab3e2532fd7d3abac911bd3ba86671ed5f662855d"
	I1225 19:02:46.529233  260034 logs.go:123] Gathering logs for CRI-O ...
	I1225 19:02:46.529264  260034 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1225 19:02:46.578649  260034 logs.go:123] Gathering logs for container status ...
	I1225 19:02:46.578689  260034 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1225 19:02:46.615466  260034 logs.go:123] Gathering logs for kubelet ...
	I1225 19:02:46.615502  260034 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1225 19:02:46.701574  260034 logs.go:123] Gathering logs for kube-apiserver [6286b997d7e536f57739dca40618206bd2111cd0ea9142bc6f4203ad7d126123] ...
	I1225 19:02:46.701605  260034 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 6286b997d7e536f57739dca40618206bd2111cd0ea9142bc6f4203ad7d126123"
	I1225 19:02:46.735031  260034 logs.go:123] Gathering logs for kube-apiserver [e3d5802d1e3d9285d4b0925e9f5391b90305352a65abbab3d6461e977e07719c] ...
	I1225 19:02:46.735066  260034 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e3d5802d1e3d9285d4b0925e9f5391b90305352a65abbab3d6461e977e07719c"
	I1225 19:02:46.774453  260034 logs.go:123] Gathering logs for etcd [b9e20d208a63ece66f4b7d503d5220ba92810f69d6963dee30a8cf70eeec5508] ...
	I1225 19:02:46.774478  260034 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b9e20d208a63ece66f4b7d503d5220ba92810f69d6963dee30a8cf70eeec5508"
	I1225 19:02:46.806331  260034 logs.go:123] Gathering logs for kube-scheduler [47a1daa4bde269c4add70fd957319c2bd011e78566a1187d9a76b78fb4005782] ...
	I1225 19:02:46.806357  260034 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 47a1daa4bde269c4add70fd957319c2bd011e78566a1187d9a76b78fb4005782"
	I1225 19:02:46.834357  260034 logs.go:123] Gathering logs for kube-controller-manager [33343dda71f9e4a85025812aa89a625b9ed4aacd8cf40e537a040b7a352c6338] ...
	I1225 19:02:46.834383  260034 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 33343dda71f9e4a85025812aa89a625b9ed4aacd8cf40e537a040b7a352c6338"
	I1225 19:02:45.136321  283722 addons.go:530] duration metric: took 2.414540777s for enable addons: enabled=[storage-provisioner dashboard default-storageclass]
	I1225 19:02:45.619971  283722 api_server.go:299] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1225 19:02:45.624831  283722 api_server.go:325] https://192.168.76.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1225 19:02:45.624857  283722 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1225 19:02:46.119996  283722 api_server.go:299] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1225 19:02:46.124768  283722 api_server.go:325] https://192.168.76.2:8443/healthz returned 200:
	ok
	I1225 19:02:46.125737  283722 api_server.go:141] control plane version: v1.34.3
	I1225 19:02:46.125763  283722 api_server.go:131] duration metric: took 1.005983234s to wait for apiserver health ...
	I1225 19:02:46.125773  283722 system_pods.go:43] waiting for kube-system pods to appear ...
	I1225 19:02:46.128670  283722 system_pods.go:59] 8 kube-system pods found
	I1225 19:02:46.128705  283722 system_pods.go:61] "coredns-66bc5c9577-n4nqj" [e02de70e-234a-4cf0-93f8-aac03bcce8cc] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1225 19:02:46.128713  283722 system_pods.go:61] "etcd-embed-certs-684693" [3bb05555-eb05-40bb-9547-53154738add7] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1225 19:02:46.128724  283722 system_pods.go:61] "kindnet-gqdkf" [655254fd-be22-4f04-a504-963b8b3da9f2] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1225 19:02:46.128730  283722 system_pods.go:61] "kube-apiserver-embed-certs-684693" [9826fbbb-77d2-43da-ae25-4d8e82236b2f] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1225 19:02:46.128736  283722 system_pods.go:61] "kube-controller-manager-embed-certs-684693" [6bedc00f-bd25-44d1-b4c3-0ebb3d35314b] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1225 19:02:46.128745  283722 system_pods.go:61] "kube-proxy-wzb26" [28372ff8-2832-49c8-b4ca-883af4201def] Running
	I1225 19:02:46.128753  283722 system_pods.go:61] "kube-scheduler-embed-certs-684693" [8cd9903e-f2f3-4efb-b85b-71ae600ce907] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1225 19:02:46.128758  283722 system_pods.go:61] "storage-provisioner" [7ee71ac9-a69c-4669-b8f2-a60dc3dac91f] Running
	I1225 19:02:46.128767  283722 system_pods.go:74] duration metric: took 2.986964ms to wait for pod list to return data ...
	I1225 19:02:46.128775  283722 default_sa.go:34] waiting for default service account to be created ...
	I1225 19:02:46.130955  283722 default_sa.go:45] found service account: "default"
	I1225 19:02:46.130979  283722 default_sa.go:55] duration metric: took 2.197529ms for default service account to be created ...
	I1225 19:02:46.130986  283722 system_pods.go:116] waiting for k8s-apps to be running ...
	I1225 19:02:46.133301  283722 system_pods.go:86] 8 kube-system pods found
	I1225 19:02:46.133324  283722 system_pods.go:89] "coredns-66bc5c9577-n4nqj" [e02de70e-234a-4cf0-93f8-aac03bcce8cc] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1225 19:02:46.133332  283722 system_pods.go:89] "etcd-embed-certs-684693" [3bb05555-eb05-40bb-9547-53154738add7] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1225 19:02:46.133337  283722 system_pods.go:89] "kindnet-gqdkf" [655254fd-be22-4f04-a504-963b8b3da9f2] Running
	I1225 19:02:46.133347  283722 system_pods.go:89] "kube-apiserver-embed-certs-684693" [9826fbbb-77d2-43da-ae25-4d8e82236b2f] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1225 19:02:46.133361  283722 system_pods.go:89] "kube-controller-manager-embed-certs-684693" [6bedc00f-bd25-44d1-b4c3-0ebb3d35314b] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1225 19:02:46.133365  283722 system_pods.go:89] "kube-proxy-wzb26" [28372ff8-2832-49c8-b4ca-883af4201def] Running
	I1225 19:02:46.133370  283722 system_pods.go:89] "kube-scheduler-embed-certs-684693" [8cd9903e-f2f3-4efb-b85b-71ae600ce907] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1225 19:02:46.133373  283722 system_pods.go:89] "storage-provisioner" [7ee71ac9-a69c-4669-b8f2-a60dc3dac91f] Running
	I1225 19:02:46.133380  283722 system_pods.go:126] duration metric: took 2.389428ms to wait for k8s-apps to be running ...
	I1225 19:02:46.133386  283722 system_svc.go:44] waiting for kubelet service to be running ....
	I1225 19:02:46.133426  283722 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1225 19:02:46.147334  283722 system_svc.go:56] duration metric: took 13.940563ms WaitForService to wait for kubelet
	I1225 19:02:46.147364  283722 kubeadm.go:587] duration metric: took 3.425608177s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1225 19:02:46.147386  283722 node_conditions.go:102] verifying NodePressure condition ...
	I1225 19:02:46.150394  283722 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1225 19:02:46.150422  283722 node_conditions.go:123] node cpu capacity is 8
	I1225 19:02:46.150438  283722 node_conditions.go:105] duration metric: took 3.045786ms to run NodePressure ...
	I1225 19:02:46.150455  283722 start.go:242] waiting for startup goroutines ...
	I1225 19:02:46.150468  283722 start.go:247] waiting for cluster config update ...
	I1225 19:02:46.150484  283722 start.go:256] writing updated cluster config ...
	I1225 19:02:46.150769  283722 ssh_runner.go:195] Run: rm -f paused
	I1225 19:02:46.154707  283722 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1225 19:02:46.158471  283722 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-n4nqj" in "kube-system" namespace to be "Ready" or be gone ...
	W1225 19:02:48.166327  283722 pod_ready.go:104] pod "coredns-66bc5c9577-n4nqj" is not "Ready", error: <nil>
	W1225 19:02:48.251567  281279 pod_ready.go:104] pod "coredns-7d764666f9-lqvms" is not "Ready", error: <nil>
	W1225 19:02:50.253392  281279 pod_ready.go:104] pod "coredns-7d764666f9-lqvms" is not "Ready", error: <nil>
	I1225 19:02:49.361398  260034 api_server.go:299] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1225 19:02:49.361870  260034 api_server.go:315] stopped: https://192.168.94.2:8443/healthz: Get "https://192.168.94.2:8443/healthz": dial tcp 192.168.94.2:8443: connect: connection refused
	I1225 19:02:49.361956  260034 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1225 19:02:49.362018  260034 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1225 19:02:49.398444  260034 cri.go:96] found id: "6286b997d7e536f57739dca40618206bd2111cd0ea9142bc6f4203ad7d126123"
	I1225 19:02:49.398471  260034 cri.go:96] found id: "e3d5802d1e3d9285d4b0925e9f5391b90305352a65abbab3d6461e977e07719c"
	I1225 19:02:49.398477  260034 cri.go:96] found id: ""
	I1225 19:02:49.398487  260034 logs.go:282] 2 containers: [6286b997d7e536f57739dca40618206bd2111cd0ea9142bc6f4203ad7d126123 e3d5802d1e3d9285d4b0925e9f5391b90305352a65abbab3d6461e977e07719c]
	I1225 19:02:49.398560  260034 ssh_runner.go:195] Run: which crictl
	I1225 19:02:49.403776  260034 ssh_runner.go:195] Run: which crictl
	I1225 19:02:49.409053  260034 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1225 19:02:49.409117  260034 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1225 19:02:49.443463  260034 cri.go:96] found id: "b9e20d208a63ece66f4b7d503d5220ba92810f69d6963dee30a8cf70eeec5508"
	I1225 19:02:49.443492  260034 cri.go:96] found id: ""
	I1225 19:02:49.443502  260034 logs.go:282] 1 containers: [b9e20d208a63ece66f4b7d503d5220ba92810f69d6963dee30a8cf70eeec5508]
	I1225 19:02:49.443561  260034 ssh_runner.go:195] Run: which crictl
	I1225 19:02:49.448740  260034 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1225 19:02:49.448807  260034 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1225 19:02:49.483163  260034 cri.go:96] found id: ""
	I1225 19:02:49.483191  260034 logs.go:282] 0 containers: []
	W1225 19:02:49.483203  260034 logs.go:284] No container was found matching "coredns"
	I1225 19:02:49.483210  260034 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1225 19:02:49.483270  260034 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1225 19:02:49.523558  260034 cri.go:96] found id: "5908cdc560e3221d929c96c779b2e73ece21f96acb63b3adbf448cc7de791f2b"
	I1225 19:02:49.523583  260034 cri.go:96] found id: "47a1daa4bde269c4add70fd957319c2bd011e78566a1187d9a76b78fb4005782"
	I1225 19:02:49.523589  260034 cri.go:96] found id: ""
	I1225 19:02:49.523599  260034 logs.go:282] 2 containers: [5908cdc560e3221d929c96c779b2e73ece21f96acb63b3adbf448cc7de791f2b 47a1daa4bde269c4add70fd957319c2bd011e78566a1187d9a76b78fb4005782]
	I1225 19:02:49.523656  260034 ssh_runner.go:195] Run: which crictl
	I1225 19:02:49.529651  260034 ssh_runner.go:195] Run: which crictl
	I1225 19:02:49.534440  260034 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1225 19:02:49.534514  260034 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1225 19:02:49.569461  260034 cri.go:96] found id: ""
	I1225 19:02:49.569487  260034 logs.go:282] 0 containers: []
	W1225 19:02:49.569498  260034 logs.go:284] No container was found matching "kube-proxy"
	I1225 19:02:49.569505  260034 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1225 19:02:49.569582  260034 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1225 19:02:49.603791  260034 cri.go:96] found id: "4d705cb1d8d35bb521ed14e8e331d8b60738d540fae000680afb7e9d190d17db"
	I1225 19:02:49.603818  260034 cri.go:96] found id: "192a48996072199570ad945ab3e2532fd7d3abac911bd3ba86671ed5f662855d"
	I1225 19:02:49.603824  260034 cri.go:96] found id: "33343dda71f9e4a85025812aa89a625b9ed4aacd8cf40e537a040b7a352c6338"
	I1225 19:02:49.603833  260034 cri.go:96] found id: ""
	I1225 19:02:49.603842  260034 logs.go:282] 3 containers: [4d705cb1d8d35bb521ed14e8e331d8b60738d540fae000680afb7e9d190d17db 192a48996072199570ad945ab3e2532fd7d3abac911bd3ba86671ed5f662855d 33343dda71f9e4a85025812aa89a625b9ed4aacd8cf40e537a040b7a352c6338]
	I1225 19:02:49.603913  260034 ssh_runner.go:195] Run: which crictl
	I1225 19:02:49.608932  260034 ssh_runner.go:195] Run: which crictl
	I1225 19:02:49.613461  260034 ssh_runner.go:195] Run: which crictl
	I1225 19:02:49.618371  260034 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1225 19:02:49.618474  260034 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1225 19:02:49.652528  260034 cri.go:96] found id: ""
	I1225 19:02:49.652562  260034 logs.go:282] 0 containers: []
	W1225 19:02:49.652573  260034 logs.go:284] No container was found matching "kindnet"
	I1225 19:02:49.652580  260034 cri.go:61] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1225 19:02:49.652640  260034 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=storage-provisioner
	I1225 19:02:49.690860  260034 cri.go:96] found id: ""
	I1225 19:02:49.690888  260034 logs.go:282] 0 containers: []
	W1225 19:02:49.690911  260034 logs.go:284] No container was found matching "storage-provisioner"
	I1225 19:02:49.690923  260034 logs.go:123] Gathering logs for etcd [b9e20d208a63ece66f4b7d503d5220ba92810f69d6963dee30a8cf70eeec5508] ...
	I1225 19:02:49.690937  260034 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b9e20d208a63ece66f4b7d503d5220ba92810f69d6963dee30a8cf70eeec5508"
	I1225 19:02:49.758816  260034 logs.go:123] Gathering logs for kube-scheduler [47a1daa4bde269c4add70fd957319c2bd011e78566a1187d9a76b78fb4005782] ...
	I1225 19:02:49.758859  260034 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 47a1daa4bde269c4add70fd957319c2bd011e78566a1187d9a76b78fb4005782"
	I1225 19:02:49.795978  260034 logs.go:123] Gathering logs for kube-controller-manager [4d705cb1d8d35bb521ed14e8e331d8b60738d540fae000680afb7e9d190d17db] ...
	I1225 19:02:49.796019  260034 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 4d705cb1d8d35bb521ed14e8e331d8b60738d540fae000680afb7e9d190d17db"
	I1225 19:02:49.831406  260034 logs.go:123] Gathering logs for container status ...
	I1225 19:02:49.831438  260034 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1225 19:02:49.872802  260034 logs.go:123] Gathering logs for describe nodes ...
	I1225 19:02:49.872838  260034 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1225 19:02:49.953150  260034 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1225 19:02:49.953176  260034 logs.go:123] Gathering logs for kube-scheduler [5908cdc560e3221d929c96c779b2e73ece21f96acb63b3adbf448cc7de791f2b] ...
	I1225 19:02:49.953190  260034 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5908cdc560e3221d929c96c779b2e73ece21f96acb63b3adbf448cc7de791f2b"
	I1225 19:02:49.982648  260034 logs.go:123] Gathering logs for kube-controller-manager [192a48996072199570ad945ab3e2532fd7d3abac911bd3ba86671ed5f662855d] ...
	I1225 19:02:49.982682  260034 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 192a48996072199570ad945ab3e2532fd7d3abac911bd3ba86671ed5f662855d"
	I1225 19:02:50.009816  260034 logs.go:123] Gathering logs for kube-controller-manager [33343dda71f9e4a85025812aa89a625b9ed4aacd8cf40e537a040b7a352c6338] ...
	I1225 19:02:50.009841  260034 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 33343dda71f9e4a85025812aa89a625b9ed4aacd8cf40e537a040b7a352c6338"
	I1225 19:02:50.035707  260034 logs.go:123] Gathering logs for CRI-O ...
	I1225 19:02:50.035736  260034 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1225 19:02:50.093790  260034 logs.go:123] Gathering logs for kubelet ...
	I1225 19:02:50.093827  260034 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1225 19:02:50.194464  260034 logs.go:123] Gathering logs for dmesg ...
	I1225 19:02:50.194502  260034 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1225 19:02:50.212181  260034 logs.go:123] Gathering logs for kube-apiserver [6286b997d7e536f57739dca40618206bd2111cd0ea9142bc6f4203ad7d126123] ...
	I1225 19:02:50.212212  260034 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 6286b997d7e536f57739dca40618206bd2111cd0ea9142bc6f4203ad7d126123"
	I1225 19:02:50.248646  260034 logs.go:123] Gathering logs for kube-apiserver [e3d5802d1e3d9285d4b0925e9f5391b90305352a65abbab3d6461e977e07719c] ...
	I1225 19:02:50.248682  260034 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e3d5802d1e3d9285d4b0925e9f5391b90305352a65abbab3d6461e977e07719c"
	W1225 19:02:50.664148  283722 pod_ready.go:104] pod "coredns-66bc5c9577-n4nqj" is not "Ready", error: <nil>
	W1225 19:02:52.664384  283722 pod_ready.go:104] pod "coredns-66bc5c9577-n4nqj" is not "Ready", error: <nil>
	W1225 19:02:52.752844  281279 pod_ready.go:104] pod "coredns-7d764666f9-lqvms" is not "Ready", error: <nil>
	W1225 19:02:55.251604  281279 pod_ready.go:104] pod "coredns-7d764666f9-lqvms" is not "Ready", error: <nil>
	W1225 19:02:57.252269  281279 pod_ready.go:104] pod "coredns-7d764666f9-lqvms" is not "Ready", error: <nil>
	I1225 19:02:52.795310  260034 api_server.go:299] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1225 19:02:52.795757  260034 api_server.go:315] stopped: https://192.168.94.2:8443/healthz: Get "https://192.168.94.2:8443/healthz": dial tcp 192.168.94.2:8443: connect: connection refused
	I1225 19:02:52.795818  260034 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1225 19:02:52.795942  260034 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1225 19:02:52.831965  260034 cri.go:96] found id: "6286b997d7e536f57739dca40618206bd2111cd0ea9142bc6f4203ad7d126123"
	I1225 19:02:52.831989  260034 cri.go:96] found id: "e3d5802d1e3d9285d4b0925e9f5391b90305352a65abbab3d6461e977e07719c"
	I1225 19:02:52.831995  260034 cri.go:96] found id: ""
	I1225 19:02:52.832005  260034 logs.go:282] 2 containers: [6286b997d7e536f57739dca40618206bd2111cd0ea9142bc6f4203ad7d126123 e3d5802d1e3d9285d4b0925e9f5391b90305352a65abbab3d6461e977e07719c]
	I1225 19:02:52.832060  260034 ssh_runner.go:195] Run: which crictl
	I1225 19:02:52.837553  260034 ssh_runner.go:195] Run: which crictl
	I1225 19:02:52.842574  260034 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1225 19:02:52.842642  260034 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1225 19:02:52.880437  260034 cri.go:96] found id: "b9e20d208a63ece66f4b7d503d5220ba92810f69d6963dee30a8cf70eeec5508"
	I1225 19:02:52.880461  260034 cri.go:96] found id: ""
	I1225 19:02:52.880472  260034 logs.go:282] 1 containers: [b9e20d208a63ece66f4b7d503d5220ba92810f69d6963dee30a8cf70eeec5508]
	I1225 19:02:52.880537  260034 ssh_runner.go:195] Run: which crictl
	I1225 19:02:52.885847  260034 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1225 19:02:52.885935  260034 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1225 19:02:52.921304  260034 cri.go:96] found id: ""
	I1225 19:02:52.921332  260034 logs.go:282] 0 containers: []
	W1225 19:02:52.921342  260034 logs.go:284] No container was found matching "coredns"
	I1225 19:02:52.921349  260034 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1225 19:02:52.921406  260034 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1225 19:02:52.957316  260034 cri.go:96] found id: "5908cdc560e3221d929c96c779b2e73ece21f96acb63b3adbf448cc7de791f2b"
	I1225 19:02:52.957334  260034 cri.go:96] found id: "47a1daa4bde269c4add70fd957319c2bd011e78566a1187d9a76b78fb4005782"
	I1225 19:02:52.957338  260034 cri.go:96] found id: ""
	I1225 19:02:52.957345  260034 logs.go:282] 2 containers: [5908cdc560e3221d929c96c779b2e73ece21f96acb63b3adbf448cc7de791f2b 47a1daa4bde269c4add70fd957319c2bd011e78566a1187d9a76b78fb4005782]
	I1225 19:02:52.957393  260034 ssh_runner.go:195] Run: which crictl
	I1225 19:02:52.962391  260034 ssh_runner.go:195] Run: which crictl
	I1225 19:02:52.967336  260034 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1225 19:02:52.967398  260034 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1225 19:02:53.004000  260034 cri.go:96] found id: ""
	I1225 19:02:53.004029  260034 logs.go:282] 0 containers: []
	W1225 19:02:53.004040  260034 logs.go:284] No container was found matching "kube-proxy"
	I1225 19:02:53.004048  260034 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1225 19:02:53.004106  260034 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1225 19:02:53.039704  260034 cri.go:96] found id: "4d705cb1d8d35bb521ed14e8e331d8b60738d540fae000680afb7e9d190d17db"
	I1225 19:02:53.039730  260034 cri.go:96] found id: "192a48996072199570ad945ab3e2532fd7d3abac911bd3ba86671ed5f662855d"
	I1225 19:02:53.039737  260034 cri.go:96] found id: "33343dda71f9e4a85025812aa89a625b9ed4aacd8cf40e537a040b7a352c6338"
	I1225 19:02:53.039741  260034 cri.go:96] found id: ""
	I1225 19:02:53.039752  260034 logs.go:282] 3 containers: [4d705cb1d8d35bb521ed14e8e331d8b60738d540fae000680afb7e9d190d17db 192a48996072199570ad945ab3e2532fd7d3abac911bd3ba86671ed5f662855d 33343dda71f9e4a85025812aa89a625b9ed4aacd8cf40e537a040b7a352c6338]
	I1225 19:02:53.039819  260034 ssh_runner.go:195] Run: which crictl
	I1225 19:02:53.045028  260034 ssh_runner.go:195] Run: which crictl
	I1225 19:02:53.049479  260034 ssh_runner.go:195] Run: which crictl
	I1225 19:02:53.053413  260034 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1225 19:02:53.053477  260034 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1225 19:02:53.087038  260034 cri.go:96] found id: ""
	I1225 19:02:53.087065  260034 logs.go:282] 0 containers: []
	W1225 19:02:53.087077  260034 logs.go:284] No container was found matching "kindnet"
	I1225 19:02:53.087085  260034 cri.go:61] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1225 19:02:53.087168  260034 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=storage-provisioner
	I1225 19:02:53.127883  260034 cri.go:96] found id: ""
	I1225 19:02:53.127944  260034 logs.go:282] 0 containers: []
	W1225 19:02:53.127956  260034 logs.go:284] No container was found matching "storage-provisioner"
	I1225 19:02:53.127967  260034 logs.go:123] Gathering logs for kube-controller-manager [192a48996072199570ad945ab3e2532fd7d3abac911bd3ba86671ed5f662855d] ...
	I1225 19:02:53.127980  260034 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 192a48996072199570ad945ab3e2532fd7d3abac911bd3ba86671ed5f662855d"
	I1225 19:02:53.166469  260034 logs.go:123] Gathering logs for kube-controller-manager [33343dda71f9e4a85025812aa89a625b9ed4aacd8cf40e537a040b7a352c6338] ...
	I1225 19:02:53.166505  260034 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 33343dda71f9e4a85025812aa89a625b9ed4aacd8cf40e537a040b7a352c6338"
	I1225 19:02:53.201700  260034 logs.go:123] Gathering logs for describe nodes ...
	I1225 19:02:53.201733  260034 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1225 19:02:53.286215  260034 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1225 19:02:53.286245  260034 logs.go:123] Gathering logs for kube-scheduler [5908cdc560e3221d929c96c779b2e73ece21f96acb63b3adbf448cc7de791f2b] ...
	I1225 19:02:53.286262  260034 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5908cdc560e3221d929c96c779b2e73ece21f96acb63b3adbf448cc7de791f2b"
	I1225 19:02:53.323782  260034 logs.go:123] Gathering logs for kube-scheduler [47a1daa4bde269c4add70fd957319c2bd011e78566a1187d9a76b78fb4005782] ...
	I1225 19:02:53.323819  260034 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 47a1daa4bde269c4add70fd957319c2bd011e78566a1187d9a76b78fb4005782"
	I1225 19:02:53.362654  260034 logs.go:123] Gathering logs for CRI-O ...
	I1225 19:02:53.362682  260034 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1225 19:02:53.433805  260034 logs.go:123] Gathering logs for container status ...
	I1225 19:02:53.433856  260034 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1225 19:02:53.478004  260034 logs.go:123] Gathering logs for kubelet ...
	I1225 19:02:53.478064  260034 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1225 19:02:53.570531  260034 logs.go:123] Gathering logs for dmesg ...
	I1225 19:02:53.570574  260034 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1225 19:02:53.589262  260034 logs.go:123] Gathering logs for kube-apiserver [6286b997d7e536f57739dca40618206bd2111cd0ea9142bc6f4203ad7d126123] ...
	I1225 19:02:53.589304  260034 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 6286b997d7e536f57739dca40618206bd2111cd0ea9142bc6f4203ad7d126123"
	I1225 19:02:53.627773  260034 logs.go:123] Gathering logs for kube-apiserver [e3d5802d1e3d9285d4b0925e9f5391b90305352a65abbab3d6461e977e07719c] ...
	I1225 19:02:53.627799  260034 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e3d5802d1e3d9285d4b0925e9f5391b90305352a65abbab3d6461e977e07719c"
	I1225 19:02:53.667400  260034 logs.go:123] Gathering logs for etcd [b9e20d208a63ece66f4b7d503d5220ba92810f69d6963dee30a8cf70eeec5508] ...
	I1225 19:02:53.667429  260034 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b9e20d208a63ece66f4b7d503d5220ba92810f69d6963dee30a8cf70eeec5508"
	I1225 19:02:53.713404  260034 logs.go:123] Gathering logs for kube-controller-manager [4d705cb1d8d35bb521ed14e8e331d8b60738d540fae000680afb7e9d190d17db] ...
	I1225 19:02:53.713440  260034 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 4d705cb1d8d35bb521ed14e8e331d8b60738d540fae000680afb7e9d190d17db"
	I1225 19:02:56.244946  260034 api_server.go:299] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1225 19:02:56.245331  260034 api_server.go:315] stopped: https://192.168.94.2:8443/healthz: Get "https://192.168.94.2:8443/healthz": dial tcp 192.168.94.2:8443: connect: connection refused
	I1225 19:02:56.245386  260034 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1225 19:02:56.245435  260034 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1225 19:02:56.274279  260034 cri.go:96] found id: "6286b997d7e536f57739dca40618206bd2111cd0ea9142bc6f4203ad7d126123"
	I1225 19:02:56.274299  260034 cri.go:96] found id: "e3d5802d1e3d9285d4b0925e9f5391b90305352a65abbab3d6461e977e07719c"
	I1225 19:02:56.274305  260034 cri.go:96] found id: ""
	I1225 19:02:56.274314  260034 logs.go:282] 2 containers: [6286b997d7e536f57739dca40618206bd2111cd0ea9142bc6f4203ad7d126123 e3d5802d1e3d9285d4b0925e9f5391b90305352a65abbab3d6461e977e07719c]
	I1225 19:02:56.274364  260034 ssh_runner.go:195] Run: which crictl
	I1225 19:02:56.278157  260034 ssh_runner.go:195] Run: which crictl
	I1225 19:02:56.281744  260034 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1225 19:02:56.281792  260034 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1225 19:02:56.313857  260034 cri.go:96] found id: "b9e20d208a63ece66f4b7d503d5220ba92810f69d6963dee30a8cf70eeec5508"
	I1225 19:02:56.313883  260034 cri.go:96] found id: ""
	I1225 19:02:56.313904  260034 logs.go:282] 1 containers: [b9e20d208a63ece66f4b7d503d5220ba92810f69d6963dee30a8cf70eeec5508]
	I1225 19:02:56.313961  260034 ssh_runner.go:195] Run: which crictl
	I1225 19:02:56.318340  260034 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1225 19:02:56.318394  260034 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1225 19:02:56.345611  260034 cri.go:96] found id: ""
	I1225 19:02:56.345637  260034 logs.go:282] 0 containers: []
	W1225 19:02:56.345647  260034 logs.go:284] No container was found matching "coredns"
	I1225 19:02:56.345654  260034 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1225 19:02:56.345715  260034 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1225 19:02:56.371521  260034 cri.go:96] found id: "5908cdc560e3221d929c96c779b2e73ece21f96acb63b3adbf448cc7de791f2b"
	I1225 19:02:56.371546  260034 cri.go:96] found id: "47a1daa4bde269c4add70fd957319c2bd011e78566a1187d9a76b78fb4005782"
	I1225 19:02:56.371552  260034 cri.go:96] found id: ""
	I1225 19:02:56.371562  260034 logs.go:282] 2 containers: [5908cdc560e3221d929c96c779b2e73ece21f96acb63b3adbf448cc7de791f2b 47a1daa4bde269c4add70fd957319c2bd011e78566a1187d9a76b78fb4005782]
	I1225 19:02:56.371622  260034 ssh_runner.go:195] Run: which crictl
	I1225 19:02:56.375277  260034 ssh_runner.go:195] Run: which crictl
	I1225 19:02:56.378876  260034 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1225 19:02:56.378975  260034 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1225 19:02:56.406017  260034 cri.go:96] found id: ""
	I1225 19:02:56.406043  260034 logs.go:282] 0 containers: []
	W1225 19:02:56.406054  260034 logs.go:284] No container was found matching "kube-proxy"
	I1225 19:02:56.406061  260034 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1225 19:02:56.406113  260034 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1225 19:02:56.436058  260034 cri.go:96] found id: "4d705cb1d8d35bb521ed14e8e331d8b60738d540fae000680afb7e9d190d17db"
	I1225 19:02:56.436083  260034 cri.go:96] found id: "192a48996072199570ad945ab3e2532fd7d3abac911bd3ba86671ed5f662855d"
	I1225 19:02:56.436089  260034 cri.go:96] found id: "33343dda71f9e4a85025812aa89a625b9ed4aacd8cf40e537a040b7a352c6338"
	I1225 19:02:56.436095  260034 cri.go:96] found id: ""
	I1225 19:02:56.436106  260034 logs.go:282] 3 containers: [4d705cb1d8d35bb521ed14e8e331d8b60738d540fae000680afb7e9d190d17db 192a48996072199570ad945ab3e2532fd7d3abac911bd3ba86671ed5f662855d 33343dda71f9e4a85025812aa89a625b9ed4aacd8cf40e537a040b7a352c6338]
	I1225 19:02:56.436193  260034 ssh_runner.go:195] Run: which crictl
	I1225 19:02:56.440297  260034 ssh_runner.go:195] Run: which crictl
	I1225 19:02:56.444082  260034 ssh_runner.go:195] Run: which crictl
	I1225 19:02:56.447543  260034 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1225 19:02:56.447586  260034 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1225 19:02:56.478685  260034 cri.go:96] found id: ""
	I1225 19:02:56.478711  260034 logs.go:282] 0 containers: []
	W1225 19:02:56.478721  260034 logs.go:284] No container was found matching "kindnet"
	I1225 19:02:56.478728  260034 cri.go:61] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1225 19:02:56.478775  260034 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=storage-provisioner
	I1225 19:02:56.507430  260034 cri.go:96] found id: ""
	I1225 19:02:56.507455  260034 logs.go:282] 0 containers: []
	W1225 19:02:56.507467  260034 logs.go:284] No container was found matching "storage-provisioner"
	I1225 19:02:56.507479  260034 logs.go:123] Gathering logs for kube-controller-manager [4d705cb1d8d35bb521ed14e8e331d8b60738d540fae000680afb7e9d190d17db] ...
	I1225 19:02:56.507496  260034 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 4d705cb1d8d35bb521ed14e8e331d8b60738d540fae000680afb7e9d190d17db"
	I1225 19:02:56.535665  260034 logs.go:123] Gathering logs for kube-controller-manager [33343dda71f9e4a85025812aa89a625b9ed4aacd8cf40e537a040b7a352c6338] ...
	I1225 19:02:56.535687  260034 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 33343dda71f9e4a85025812aa89a625b9ed4aacd8cf40e537a040b7a352c6338"
	I1225 19:02:56.564333  260034 logs.go:123] Gathering logs for container status ...
	I1225 19:02:56.564359  260034 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1225 19:02:56.595785  260034 logs.go:123] Gathering logs for describe nodes ...
	I1225 19:02:56.595813  260034 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1225 19:02:56.651925  260034 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1225 19:02:56.651958  260034 logs.go:123] Gathering logs for kube-apiserver [6286b997d7e536f57739dca40618206bd2111cd0ea9142bc6f4203ad7d126123] ...
	I1225 19:02:56.651969  260034 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 6286b997d7e536f57739dca40618206bd2111cd0ea9142bc6f4203ad7d126123"
	I1225 19:02:56.684075  260034 logs.go:123] Gathering logs for kube-apiserver [e3d5802d1e3d9285d4b0925e9f5391b90305352a65abbab3d6461e977e07719c] ...
	I1225 19:02:56.684106  260034 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e3d5802d1e3d9285d4b0925e9f5391b90305352a65abbab3d6461e977e07719c"
	I1225 19:02:56.723930  260034 logs.go:123] Gathering logs for etcd [b9e20d208a63ece66f4b7d503d5220ba92810f69d6963dee30a8cf70eeec5508] ...
	I1225 19:02:56.723955  260034 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b9e20d208a63ece66f4b7d503d5220ba92810f69d6963dee30a8cf70eeec5508"
	I1225 19:02:56.760118  260034 logs.go:123] Gathering logs for kube-scheduler [5908cdc560e3221d929c96c779b2e73ece21f96acb63b3adbf448cc7de791f2b] ...
	I1225 19:02:56.760152  260034 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5908cdc560e3221d929c96c779b2e73ece21f96acb63b3adbf448cc7de791f2b"
	I1225 19:02:56.788336  260034 logs.go:123] Gathering logs for kube-scheduler [47a1daa4bde269c4add70fd957319c2bd011e78566a1187d9a76b78fb4005782] ...
	I1225 19:02:56.788365  260034 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 47a1daa4bde269c4add70fd957319c2bd011e78566a1187d9a76b78fb4005782"
	I1225 19:02:56.815557  260034 logs.go:123] Gathering logs for kube-controller-manager [192a48996072199570ad945ab3e2532fd7d3abac911bd3ba86671ed5f662855d] ...
	I1225 19:02:56.815592  260034 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 192a48996072199570ad945ab3e2532fd7d3abac911bd3ba86671ed5f662855d"
	I1225 19:02:56.844611  260034 logs.go:123] Gathering logs for CRI-O ...
	I1225 19:02:56.844640  260034 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1225 19:02:56.898818  260034 logs.go:123] Gathering logs for kubelet ...
	I1225 19:02:56.898848  260034 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1225 19:02:56.982620  260034 logs.go:123] Gathering logs for dmesg ...
	I1225 19:02:56.982643  260034 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
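
The trace above is minikube's log-collection pass for the cluster at 192.168.94.2: it resolves crictl, lists the containers backing each control-plane component, tails their logs, and then pulls the CRI-O and kubelet journals plus recent kernel messages. A minimal shell sketch of the same gathering steps run by hand on that node (assuming crictl and journalctl are installed there; <container-id> is a placeholder for one of the IDs shown in the Run: lines above):

  # tail one component container's logs, as in the "crictl logs --tail 400" runs above
  sudo crictl logs --tail 400 <container-id>
  # CRI-O and kubelet journals, last 400 lines each
  sudo journalctl -u crio -n 400
  sudo journalctl -u kubelet -n 400
  # recent kernel warnings and errors
  sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
  # overall container status across all states
  sudo crictl ps -a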
	
	
	==> CRI-O <==
	Dec 25 19:02:19 old-k8s-version-163446 crio[570]: time="2025-12-25T19:02:19.256721597Z" level=info msg="Started container" PID=1742 containerID=dff3ea337b3b7f2eebc6a1b8971f3ad7f561f9d71c246254414b5857c2e68e88 description=kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-7fb8k/dashboard-metrics-scraper id=e9bbab68-2d64-4c5d-bbad-e4f828d26e14 name=/runtime.v1.RuntimeService/StartContainer sandboxID=ed31591ef3633165d9da7dd0fa0d1effb0c331079a542e6130150f8162e5e5f2
	Dec 25 19:02:20 old-k8s-version-163446 crio[570]: time="2025-12-25T19:02:20.217938847Z" level=info msg="Removing container: 853e8f275cae406ddd405e3d4d78490cafcf6ed513368d7188a4af3283985854" id=3e3cdde9-f1c4-4b34-b795-25b09a398b03 name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 25 19:02:20 old-k8s-version-163446 crio[570]: time="2025-12-25T19:02:20.227497073Z" level=info msg="Removed container 853e8f275cae406ddd405e3d4d78490cafcf6ed513368d7188a4af3283985854: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-7fb8k/dashboard-metrics-scraper" id=3e3cdde9-f1c4-4b34-b795-25b09a398b03 name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 25 19:02:32 old-k8s-version-163446 crio[570]: time="2025-12-25T19:02:32.248019848Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=0cc7bcb8-4571-4f92-abbf-b751d5c22d37 name=/runtime.v1.ImageService/ImageStatus
	Dec 25 19:02:32 old-k8s-version-163446 crio[570]: time="2025-12-25T19:02:32.249405499Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=624d1249-c99d-4893-8b6d-6c0f4d440cd0 name=/runtime.v1.ImageService/ImageStatus
	Dec 25 19:02:32 old-k8s-version-163446 crio[570]: time="2025-12-25T19:02:32.251831348Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=938c57b0-1948-46df-bc54-f2faaba880de name=/runtime.v1.RuntimeService/CreateContainer
	Dec 25 19:02:32 old-k8s-version-163446 crio[570]: time="2025-12-25T19:02:32.252121384Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 25 19:02:32 old-k8s-version-163446 crio[570]: time="2025-12-25T19:02:32.258240184Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 25 19:02:32 old-k8s-version-163446 crio[570]: time="2025-12-25T19:02:32.258430496Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/db30fdad163134de9ff6722eebee77220d016321b035e760269a5032e93db16b/merged/etc/passwd: no such file or directory"
	Dec 25 19:02:32 old-k8s-version-163446 crio[570]: time="2025-12-25T19:02:32.258460372Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/db30fdad163134de9ff6722eebee77220d016321b035e760269a5032e93db16b/merged/etc/group: no such file or directory"
	Dec 25 19:02:32 old-k8s-version-163446 crio[570]: time="2025-12-25T19:02:32.258765586Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 25 19:02:32 old-k8s-version-163446 crio[570]: time="2025-12-25T19:02:32.288656922Z" level=info msg="Created container 4ce1005c7b5926eec1ae94602837760de0b75dfa3656524847d215328c75ac0b: kube-system/storage-provisioner/storage-provisioner" id=938c57b0-1948-46df-bc54-f2faaba880de name=/runtime.v1.RuntimeService/CreateContainer
	Dec 25 19:02:32 old-k8s-version-163446 crio[570]: time="2025-12-25T19:02:32.289280063Z" level=info msg="Starting container: 4ce1005c7b5926eec1ae94602837760de0b75dfa3656524847d215328c75ac0b" id=26da47cf-8fe6-4c01-ae0a-093986da1327 name=/runtime.v1.RuntimeService/StartContainer
	Dec 25 19:02:32 old-k8s-version-163446 crio[570]: time="2025-12-25T19:02:32.291377518Z" level=info msg="Started container" PID=1758 containerID=4ce1005c7b5926eec1ae94602837760de0b75dfa3656524847d215328c75ac0b description=kube-system/storage-provisioner/storage-provisioner id=26da47cf-8fe6-4c01-ae0a-093986da1327 name=/runtime.v1.RuntimeService/StartContainer sandboxID=c99ce6c9f6e774697cb76b2f90f3cfc96a5f6e7a8235ee1d45e10a318861c6aa
	Dec 25 19:02:37 old-k8s-version-163446 crio[570]: time="2025-12-25T19:02:37.143820902Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=273bfeb9-40e4-4a3f-87b6-c2fe80d6ac8f name=/runtime.v1.ImageService/ImageStatus
	Dec 25 19:02:37 old-k8s-version-163446 crio[570]: time="2025-12-25T19:02:37.144767902Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=591c4535-969a-4e7f-b8ab-0c60981929b8 name=/runtime.v1.ImageService/ImageStatus
	Dec 25 19:02:37 old-k8s-version-163446 crio[570]: time="2025-12-25T19:02:37.145759577Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-7fb8k/dashboard-metrics-scraper" id=fe944e54-5c3a-4709-97d2-eef871920404 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 25 19:02:37 old-k8s-version-163446 crio[570]: time="2025-12-25T19:02:37.145890862Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 25 19:02:37 old-k8s-version-163446 crio[570]: time="2025-12-25T19:02:37.151661702Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 25 19:02:37 old-k8s-version-163446 crio[570]: time="2025-12-25T19:02:37.152180243Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 25 19:02:37 old-k8s-version-163446 crio[570]: time="2025-12-25T19:02:37.185986808Z" level=info msg="Created container ea767d69b5c8b7ce73aad86ce46fdf6f6047c47c581f8fb1f16f896ca43c1533: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-7fb8k/dashboard-metrics-scraper" id=fe944e54-5c3a-4709-97d2-eef871920404 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 25 19:02:37 old-k8s-version-163446 crio[570]: time="2025-12-25T19:02:37.186631517Z" level=info msg="Starting container: ea767d69b5c8b7ce73aad86ce46fdf6f6047c47c581f8fb1f16f896ca43c1533" id=598fd0c0-839f-41a5-887f-a31e1c29a3b0 name=/runtime.v1.RuntimeService/StartContainer
	Dec 25 19:02:37 old-k8s-version-163446 crio[570]: time="2025-12-25T19:02:37.188498093Z" level=info msg="Started container" PID=1774 containerID=ea767d69b5c8b7ce73aad86ce46fdf6f6047c47c581f8fb1f16f896ca43c1533 description=kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-7fb8k/dashboard-metrics-scraper id=598fd0c0-839f-41a5-887f-a31e1c29a3b0 name=/runtime.v1.RuntimeService/StartContainer sandboxID=ed31591ef3633165d9da7dd0fa0d1effb0c331079a542e6130150f8162e5e5f2
	Dec 25 19:02:37 old-k8s-version-163446 crio[570]: time="2025-12-25T19:02:37.263421261Z" level=info msg="Removing container: dff3ea337b3b7f2eebc6a1b8971f3ad7f561f9d71c246254414b5857c2e68e88" id=d6bb0370-eb52-4cbb-9712-0809cd0c1a50 name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 25 19:02:37 old-k8s-version-163446 crio[570]: time="2025-12-25T19:02:37.274268031Z" level=info msg="Removed container dff3ea337b3b7f2eebc6a1b8971f3ad7f561f9d71c246254414b5857c2e68e88: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-7fb8k/dashboard-metrics-scraper" id=d6bb0370-eb52-4cbb-9712-0809cd0c1a50 name=/runtime.v1.RuntimeService/RemoveContainer
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED              STATE               NAME                        ATTEMPT             POD ID              POD                                              NAMESPACE
	ea767d69b5c8b       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           21 seconds ago       Exited              dashboard-metrics-scraper   2                   ed31591ef3633       dashboard-metrics-scraper-5f989dc9cf-7fb8k       kubernetes-dashboard
	4ce1005c7b592       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           26 seconds ago       Running             storage-provisioner         1                   c99ce6c9f6e77       storage-provisioner                              kube-system
	e37efd9b2c0f4       docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029   42 seconds ago       Running             kubernetes-dashboard        0                   681ad6f331185       kubernetes-dashboard-8694d4445c-9sffb            kubernetes-dashboard
	ccffe0a749709       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc                                           57 seconds ago       Running             coredns                     0                   640c3d286e54a       coredns-5dd5756b68-chdzr                         kube-system
	6f9ee785f7e06       56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c                                           57 seconds ago       Running             busybox                     1                   057743e8e2dbd       busybox                                          default
	d25ed4ed70040       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           57 seconds ago       Exited              storage-provisioner         0                   c99ce6c9f6e77       storage-provisioner                              kube-system
	511e075a73b01       4921d7a6dffa922dd679732ba4797085c4f39e9a53bee8b6fdb1d463e8571251                                           57 seconds ago       Running             kindnet-cni                 0                   39f9bd9a7f7b5       kindnet-krjfj                                    kube-system
	376a01fa2f5cd       ea1030da44aa18666a7bf15fddd2a38c3143c3277159cb8bdd95f45c8ce62d7a                                           57 seconds ago       Running             kube-proxy                  0                   e5e0661513f15       kube-proxy-mxztf                                 kube-system
	b4b49a940b58f       bb5e0dde9054c02d6badee88547be7e7bb7b7b818d277c8a61b4b29484bbff95                                           About a minute ago   Running             kube-apiserver              0                   33211a7ded48e       kube-apiserver-old-k8s-version-163446            kube-system
	739051af3cadd       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9                                           About a minute ago   Running             etcd                        0                   ae42aaf9c0479       etcd-old-k8s-version-163446                      kube-system
	c1c1926bfed12       4be79c38a4bab6e1252a35697500e8a0d9c5c7c771d9fcc1935c9a7f6cdf4c62                                           About a minute ago   Running             kube-controller-manager     0                   95be73706a6b2       kube-controller-manager-old-k8s-version-163446   kube-system
	b66569b95e263       f6f496300a2ae7a6727ccf3080d66d2fd22b6cfc271df5351c976c23a28bb157                                           About a minute ago   Running             kube-scheduler              0                   c537bdd069a78       kube-scheduler-old-k8s-version-163446            kube-system
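
The first row shows the dashboard-metrics-scraper container Exited on its second attempt, which matches the CrashLoopBackOff entries in the kubelet log further down. A sketch for inspecting just that container with the same crictl filters used elsewhere in this report (assuming crictl is on the node's PATH; the ID is the truncated one from the CONTAINER column):

  # list the scraper container in any state
  sudo crictl --timeout=10s ps -a --name=dashboard-metrics-scraper
  # pull its recent output
  sudo crictl logs --tail 400 ea767d69b5c8b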
	
	
	==> coredns [ccffe0a74970948877693b5a337809301f8eb0c24483e7ad98ec3964e8a6ee9d] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 25cf5af2951e282c4b0e961a02fb5d3e57c974501832fee92eec17b5135b9ec9d9e87d2ac94e6d117a5ed3dd54e8800aa7b4479706eb54497145ccdb80397d1b
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] 127.0.0.1:33306 - 28751 "HINFO IN 5159646874572025505.42273120866640963. udp 55 false 512" NXDOMAIN qr,rd,ra 130 0.037363138s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> describe nodes <==
	Name:               old-k8s-version-163446
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=old-k8s-version-163446
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=65b0339f3ab6fa9cf527eb915d9288ef7a9c7fef
	                    minikube.k8s.io/name=old-k8s-version-163446
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_25T19_00_55_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 25 Dec 2025 19:00:51 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  old-k8s-version-163446
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 25 Dec 2025 19:02:51 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 25 Dec 2025 19:02:31 +0000   Thu, 25 Dec 2025 19:00:49 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 25 Dec 2025 19:02:31 +0000   Thu, 25 Dec 2025 19:00:49 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 25 Dec 2025 19:02:31 +0000   Thu, 25 Dec 2025 19:00:49 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 25 Dec 2025 19:02:31 +0000   Thu, 25 Dec 2025 19:01:20 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.103.2
	  Hostname:    old-k8s-version-163446
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863360Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863360Ki
	  pods:               110
	System Info:
	  Machine ID:                 6a05ebbd9dbde47388fc365d694bbbfd
	  System UUID:                0cc28420-dcfc-4f7d-abe6-5c56c5c91736
	  Boot ID:                    665c5054-bd76-444c-ba4d-23c4edde1464
	  Kernel Version:             6.8.0-1045-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.3
	  Kubelet Version:            v1.28.0
	  Kube-Proxy Version:         v1.28.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                              ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         95s
	  kube-system                 coredns-5dd5756b68-chdzr                          100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     111s
	  kube-system                 etcd-old-k8s-version-163446                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         2m4s
	  kube-system                 kindnet-krjfj                                     100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      111s
	  kube-system                 kube-apiserver-old-k8s-version-163446             250m (3%)     0 (0%)      0 (0%)           0 (0%)         2m4s
	  kube-system                 kube-controller-manager-old-k8s-version-163446    200m (2%)     0 (0%)      0 (0%)           0 (0%)         2m4s
	  kube-system                 kube-proxy-mxztf                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         111s
	  kube-system                 kube-scheduler-old-k8s-version-163446             100m (1%)     0 (0%)      0 (0%)           0 (0%)         2m4s
	  kube-system                 storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         111s
	  kubernetes-dashboard        dashboard-metrics-scraper-5f989dc9cf-7fb8k        0 (0%)        0 (0%)      0 (0%)           0 (0%)         45s
	  kubernetes-dashboard        kubernetes-dashboard-8694d4445c-9sffb             0 (0%)        0 (0%)      0 (0%)           0 (0%)         45s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 111s               kube-proxy       
	  Normal  Starting                 57s                kube-proxy       
	  Normal  NodeHasSufficientMemory  2m4s               kubelet          Node old-k8s-version-163446 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m4s               kubelet          Node old-k8s-version-163446 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m4s               kubelet          Node old-k8s-version-163446 status is now: NodeHasSufficientPID
	  Normal  Starting                 2m4s               kubelet          Starting kubelet.
	  Normal  RegisteredNode           112s               node-controller  Node old-k8s-version-163446 event: Registered Node old-k8s-version-163446 in Controller
	  Normal  NodeReady                98s                kubelet          Node old-k8s-version-163446 status is now: NodeReady
	  Normal  Starting                 60s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  60s (x9 over 60s)  kubelet          Node old-k8s-version-163446 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    60s (x8 over 60s)  kubelet          Node old-k8s-version-163446 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     60s (x7 over 60s)  kubelet          Node old-k8s-version-163446 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           45s                node-controller  Node old-k8s-version-163446 event: Registered Node old-k8s-version-163446 in Controller
	
	
	==> dmesg <==
	[Dec25 18:17] MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
	[  +0.001703] TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details.
	[  +0.001000] MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
	[  +0.084012] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
	[  +0.391152] i8042: Warning: Keylock active
	[  +0.010665] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.485479] block sda: the capability attribute has been deprecated.
	[  +0.079658] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.024208] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +4.790329] kauditd_printk_skb: 47 callbacks suppressed
	
	
	==> etcd [739051af3caddbf4be898cc7e7f82a012b1edd3b32b01e120d48d8420bf77f67] <==
	{"level":"info","ts":"2025-12-25T19:01:58.706442Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f23060b075c4c089 switched to configuration voters=(17451554867067011209)"}
	{"level":"info","ts":"2025-12-25T19:01:58.706585Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-12-25T19:01:58.706695Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"3336683c081d149d","local-member-id":"f23060b075c4c089","added-peer-id":"f23060b075c4c089","added-peer-peer-urls":["https://192.168.103.2:2380"]}
	{"level":"info","ts":"2025-12-25T19:01:58.706801Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"3336683c081d149d","local-member-id":"f23060b075c4c089","cluster-version":"3.5"}
	{"level":"info","ts":"2025-12-25T19:01:58.70683Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2025-12-25T19:01:58.7066Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-12-25T19:01:58.708055Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2025-12-25T19:01:58.708164Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.103.2:2380"}
	{"level":"info","ts":"2025-12-25T19:01:58.708205Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.103.2:2380"}
	{"level":"info","ts":"2025-12-25T19:01:58.708399Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"f23060b075c4c089","initial-advertise-peer-urls":["https://192.168.103.2:2380"],"listen-peer-urls":["https://192.168.103.2:2380"],"advertise-client-urls":["https://192.168.103.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.103.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2025-12-25T19:01:58.708429Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2025-12-25T19:01:59.798077Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f23060b075c4c089 is starting a new election at term 2"}
	{"level":"info","ts":"2025-12-25T19:01:59.798117Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f23060b075c4c089 became pre-candidate at term 2"}
	{"level":"info","ts":"2025-12-25T19:01:59.798164Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f23060b075c4c089 received MsgPreVoteResp from f23060b075c4c089 at term 2"}
	{"level":"info","ts":"2025-12-25T19:01:59.798178Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f23060b075c4c089 became candidate at term 3"}
	{"level":"info","ts":"2025-12-25T19:01:59.798183Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f23060b075c4c089 received MsgVoteResp from f23060b075c4c089 at term 3"}
	{"level":"info","ts":"2025-12-25T19:01:59.79821Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f23060b075c4c089 became leader at term 3"}
	{"level":"info","ts":"2025-12-25T19:01:59.798217Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: f23060b075c4c089 elected leader f23060b075c4c089 at term 3"}
	{"level":"info","ts":"2025-12-25T19:01:59.799675Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-12-25T19:01:59.799711Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-12-25T19:01:59.799666Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"f23060b075c4c089","local-member-attributes":"{Name:old-k8s-version-163446 ClientURLs:[https://192.168.103.2:2379]}","request-path":"/0/members/f23060b075c4c089/attributes","cluster-id":"3336683c081d149d","publish-timeout":"7s"}
	{"level":"info","ts":"2025-12-25T19:01:59.799875Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-12-25T19:01:59.799913Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2025-12-25T19:01:59.800883Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.103.2:2379"}
	{"level":"info","ts":"2025-12-25T19:01:59.800889Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	
	
	==> kernel <==
	 19:02:58 up 45 min,  0 user,  load average: 2.52, 2.40, 1.76
	Linux old-k8s-version-163446 6.8.0-1045-gcp #48~22.04.1-Ubuntu SMP Tue Nov 25 13:07:56 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [511e075a73b0123446e15801390ee877057b17d9055b6b3110d706ac86692627] <==
	I1225 19:02:01.665888       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1225 19:02:01.666157       1 main.go:139] hostIP = 192.168.103.2
	podIP = 192.168.103.2
	I1225 19:02:01.666339       1 main.go:148] setting mtu 1500 for CNI 
	I1225 19:02:01.666363       1 main.go:178] kindnetd IP family: "ipv4"
	I1225 19:02:01.666387       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-12-25T19:02:01Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1225 19:02:01.959646       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1225 19:02:01.959826       1 controller.go:381] "Waiting for informer caches to sync"
	I1225 19:02:01.959984       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1225 19:02:02.059679       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1225 19:02:02.459888       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1225 19:02:02.459946       1 metrics.go:72] Registering metrics
	I1225 19:02:02.460029       1 controller.go:711] "Syncing nftables rules"
	I1225 19:02:11.868042       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1225 19:02:11.868111       1 main.go:301] handling current node
	I1225 19:02:21.868612       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1225 19:02:21.868668       1 main.go:301] handling current node
	I1225 19:02:31.867984       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1225 19:02:31.868037       1 main.go:301] handling current node
	I1225 19:02:41.869023       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1225 19:02:41.869314       1 main.go:301] handling current node
	I1225 19:02:51.869094       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1225 19:02:51.869159       1 main.go:301] handling current node
	
	
	==> kube-apiserver [b4b49a940b58f765b0e9b7ce25aea04517e3af0b3e9f3d8cb36a460d92e868f4] <==
	I1225 19:02:00.746662       1 shared_informer.go:318] Caches are synced for node_authorizer
	I1225 19:02:00.784726       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I1225 19:02:00.784761       1 apf_controller.go:377] Running API Priority and Fairness config worker
	I1225 19:02:00.784771       1 apf_controller.go:380] Running API Priority and Fairness periodic rebalancing process
	I1225 19:02:00.784782       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	I1225 19:02:00.784802       1 aggregator.go:166] initial CRD sync complete...
	I1225 19:02:00.784815       1 autoregister_controller.go:141] Starting autoregister controller
	I1225 19:02:00.784820       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1225 19:02:00.784738       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I1225 19:02:00.784832       1 cache.go:39] Caches are synced for autoregister controller
	I1225 19:02:00.784741       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1225 19:02:00.788440       1 shared_informer.go:318] Caches are synced for configmaps
	E1225 19:02:00.790340       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1225 19:02:01.567091       1 controller.go:624] quota admission added evaluator for: namespaces
	I1225 19:02:01.601207       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I1225 19:02:01.617541       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1225 19:02:01.626660       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1225 19:02:01.634431       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I1225 19:02:01.668243       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.111.102.80"}
	I1225 19:02:01.682927       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.107.13.17"}
	I1225 19:02:01.683283       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1225 19:02:13.415502       1 controller.go:624] quota admission added evaluator for: endpoints
	I1225 19:02:13.415547       1 controller.go:624] quota admission added evaluator for: endpoints
	I1225 19:02:13.416127       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1225 19:02:13.440037       1 controller.go:624] quota admission added evaluator for: replicasets.apps
	
	
	==> kube-controller-manager [c1c1926bfed12740e7d65b2cd81a01a86dd6a1887ce4e9b9fc5fd2fa5d9e0552] <==
	I1225 19:02:13.466874       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="24.877069ms"
	I1225 19:02:13.467663       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="24.035516ms"
	I1225 19:02:13.474623       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="7.674579ms"
	I1225 19:02:13.474622       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="6.908763ms"
	I1225 19:02:13.474794       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="55.414µs"
	I1225 19:02:13.474800       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="37.481µs"
	I1225 19:02:13.479875       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="87.278µs"
	I1225 19:02:13.488602       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="77.529µs"
	I1225 19:02:13.520027       1 shared_informer.go:318] Caches are synced for disruption
	I1225 19:02:13.555513       1 shared_informer.go:318] Caches are synced for crt configmap
	I1225 19:02:13.573840       1 shared_informer.go:318] Caches are synced for resource quota
	I1225 19:02:13.625873       1 shared_informer.go:318] Caches are synced for bootstrap_signer
	I1225 19:02:13.637987       1 shared_informer.go:318] Caches are synced for resource quota
	I1225 19:02:13.951237       1 shared_informer.go:318] Caches are synced for garbage collector
	I1225 19:02:13.952352       1 shared_informer.go:318] Caches are synced for garbage collector
	I1225 19:02:13.952382       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	I1225 19:02:17.230432       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="9.646371ms"
	I1225 19:02:17.230517       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="47.364µs"
	I1225 19:02:19.226293       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="69.326µs"
	I1225 19:02:20.227969       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="94.416µs"
	I1225 19:02:21.230010       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="82.55µs"
	I1225 19:02:37.274639       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="114.544µs"
	I1225 19:02:39.925668       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="11.180442ms"
	I1225 19:02:39.925795       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="80.825µs"
	I1225 19:02:43.779869       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="83.76µs"
	
	
	==> kube-proxy [376a01fa2f5cd87c0dae38ad74332c0ae0c0d93fa441f19a90ff655c9ac8f482] <==
	I1225 19:02:01.539246       1 server_others.go:69] "Using iptables proxy"
	I1225 19:02:01.550139       1 node.go:141] Successfully retrieved node IP: 192.168.103.2
	I1225 19:02:01.569877       1 server.go:632] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1225 19:02:01.572672       1 server_others.go:152] "Using iptables Proxier"
	I1225 19:02:01.572703       1 server_others.go:421] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I1225 19:02:01.572710       1 server_others.go:438] "Defaulting to no-op detect-local"
	I1225 19:02:01.572733       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1225 19:02:01.572950       1 server.go:846] "Version info" version="v1.28.0"
	I1225 19:02:01.572964       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1225 19:02:01.573566       1 config.go:97] "Starting endpoint slice config controller"
	I1225 19:02:01.573605       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1225 19:02:01.573634       1 config.go:188] "Starting service config controller"
	I1225 19:02:01.573647       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1225 19:02:01.573845       1 config.go:315] "Starting node config controller"
	I1225 19:02:01.573861       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1225 19:02:01.673707       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I1225 19:02:01.673731       1 shared_informer.go:318] Caches are synced for service config
	I1225 19:02:01.673941       1 shared_informer.go:318] Caches are synced for node config
	
	
	==> kube-scheduler [b66569b95e263d0c33bf3838b444600f919279c26935aa24c1bd52a5a645a4dd] <==
	I1225 19:01:59.029075       1 serving.go:348] Generated self-signed cert in-memory
	I1225 19:02:00.744373       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.28.0"
	I1225 19:02:00.744396       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1225 19:02:00.747792       1 requestheader_controller.go:169] Starting RequestHeaderAuthRequestController
	I1225 19:02:00.747816       1 shared_informer.go:311] Waiting for caches to sync for RequestHeaderAuthRequestController
	I1225 19:02:00.747820       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1225 19:02:00.747839       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1225 19:02:00.747942       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1225 19:02:00.747979       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
	I1225 19:02:00.749685       1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
	I1225 19:02:00.749814       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I1225 19:02:00.848214       1 shared_informer.go:318] Caches are synced for RequestHeaderAuthRequestController
	I1225 19:02:00.848244       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1225 19:02:00.848246       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
	
	
	==> kubelet <==
	Dec 25 19:02:13 old-k8s-version-163446 kubelet[733]: I1225 19:02:13.464545     733 topology_manager.go:215] "Topology Admit Handler" podUID="38c48988-6be8-47e5-a66b-4c0f3bc3dbea" podNamespace="kubernetes-dashboard" podName="dashboard-metrics-scraper-5f989dc9cf-7fb8k"
	Dec 25 19:02:13 old-k8s-version-163446 kubelet[733]: I1225 19:02:13.587090     733 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7qwzj\" (UniqueName: \"kubernetes.io/projected/8670172f-1b60-424f-b7a5-cf89fb165120-kube-api-access-7qwzj\") pod \"kubernetes-dashboard-8694d4445c-9sffb\" (UID: \"8670172f-1b60-424f-b7a5-cf89fb165120\") " pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-9sffb"
	Dec 25 19:02:13 old-k8s-version-163446 kubelet[733]: I1225 19:02:13.587174     733 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/8670172f-1b60-424f-b7a5-cf89fb165120-tmp-volume\") pod \"kubernetes-dashboard-8694d4445c-9sffb\" (UID: \"8670172f-1b60-424f-b7a5-cf89fb165120\") " pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-9sffb"
	Dec 25 19:02:13 old-k8s-version-163446 kubelet[733]: I1225 19:02:13.587236     733 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8lbbf\" (UniqueName: \"kubernetes.io/projected/38c48988-6be8-47e5-a66b-4c0f3bc3dbea-kube-api-access-8lbbf\") pod \"dashboard-metrics-scraper-5f989dc9cf-7fb8k\" (UID: \"38c48988-6be8-47e5-a66b-4c0f3bc3dbea\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-7fb8k"
	Dec 25 19:02:13 old-k8s-version-163446 kubelet[733]: I1225 19:02:13.587323     733 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/38c48988-6be8-47e5-a66b-4c0f3bc3dbea-tmp-volume\") pod \"dashboard-metrics-scraper-5f989dc9cf-7fb8k\" (UID: \"38c48988-6be8-47e5-a66b-4c0f3bc3dbea\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-7fb8k"
	Dec 25 19:02:19 old-k8s-version-163446 kubelet[733]: I1225 19:02:19.213359     733 scope.go:117] "RemoveContainer" containerID="853e8f275cae406ddd405e3d4d78490cafcf6ed513368d7188a4af3283985854"
	Dec 25 19:02:19 old-k8s-version-163446 kubelet[733]: I1225 19:02:19.226498     733 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-9sffb" podStartSLOduration=3.328246562 podCreationTimestamp="2025-12-25 19:02:13 +0000 UTC" firstStartedPulling="2025-12-25 19:02:13.790234842 +0000 UTC m=+15.736993272" lastFinishedPulling="2025-12-25 19:02:16.68844283 +0000 UTC m=+18.635201265" observedRunningTime="2025-12-25 19:02:17.220581012 +0000 UTC m=+19.167339471" watchObservedRunningTime="2025-12-25 19:02:19.226454555 +0000 UTC m=+21.173212991"
	Dec 25 19:02:20 old-k8s-version-163446 kubelet[733]: I1225 19:02:20.216751     733 scope.go:117] "RemoveContainer" containerID="853e8f275cae406ddd405e3d4d78490cafcf6ed513368d7188a4af3283985854"
	Dec 25 19:02:20 old-k8s-version-163446 kubelet[733]: I1225 19:02:20.216961     733 scope.go:117] "RemoveContainer" containerID="dff3ea337b3b7f2eebc6a1b8971f3ad7f561f9d71c246254414b5857c2e68e88"
	Dec 25 19:02:20 old-k8s-version-163446 kubelet[733]: E1225 19:02:20.217325     733 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-7fb8k_kubernetes-dashboard(38c48988-6be8-47e5-a66b-4c0f3bc3dbea)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-7fb8k" podUID="38c48988-6be8-47e5-a66b-4c0f3bc3dbea"
	Dec 25 19:02:21 old-k8s-version-163446 kubelet[733]: I1225 19:02:21.220469     733 scope.go:117] "RemoveContainer" containerID="dff3ea337b3b7f2eebc6a1b8971f3ad7f561f9d71c246254414b5857c2e68e88"
	Dec 25 19:02:21 old-k8s-version-163446 kubelet[733]: E1225 19:02:21.220840     733 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-7fb8k_kubernetes-dashboard(38c48988-6be8-47e5-a66b-4c0f3bc3dbea)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-7fb8k" podUID="38c48988-6be8-47e5-a66b-4c0f3bc3dbea"
	Dec 25 19:02:23 old-k8s-version-163446 kubelet[733]: I1225 19:02:23.767502     733 scope.go:117] "RemoveContainer" containerID="dff3ea337b3b7f2eebc6a1b8971f3ad7f561f9d71c246254414b5857c2e68e88"
	Dec 25 19:02:23 old-k8s-version-163446 kubelet[733]: E1225 19:02:23.767827     733 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-7fb8k_kubernetes-dashboard(38c48988-6be8-47e5-a66b-4c0f3bc3dbea)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-7fb8k" podUID="38c48988-6be8-47e5-a66b-4c0f3bc3dbea"
	Dec 25 19:02:32 old-k8s-version-163446 kubelet[733]: I1225 19:02:32.247484     733 scope.go:117] "RemoveContainer" containerID="d25ed4ed70040fac28d88caa14abd75d2a95994c5887f5143d7fa3e7f5b52c82"
	Dec 25 19:02:37 old-k8s-version-163446 kubelet[733]: I1225 19:02:37.143267     733 scope.go:117] "RemoveContainer" containerID="dff3ea337b3b7f2eebc6a1b8971f3ad7f561f9d71c246254414b5857c2e68e88"
	Dec 25 19:02:37 old-k8s-version-163446 kubelet[733]: I1225 19:02:37.262133     733 scope.go:117] "RemoveContainer" containerID="dff3ea337b3b7f2eebc6a1b8971f3ad7f561f9d71c246254414b5857c2e68e88"
	Dec 25 19:02:37 old-k8s-version-163446 kubelet[733]: I1225 19:02:37.262400     733 scope.go:117] "RemoveContainer" containerID="ea767d69b5c8b7ce73aad86ce46fdf6f6047c47c581f8fb1f16f896ca43c1533"
	Dec 25 19:02:37 old-k8s-version-163446 kubelet[733]: E1225 19:02:37.262792     733 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-7fb8k_kubernetes-dashboard(38c48988-6be8-47e5-a66b-4c0f3bc3dbea)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-7fb8k" podUID="38c48988-6be8-47e5-a66b-4c0f3bc3dbea"
	Dec 25 19:02:43 old-k8s-version-163446 kubelet[733]: I1225 19:02:43.766887     733 scope.go:117] "RemoveContainer" containerID="ea767d69b5c8b7ce73aad86ce46fdf6f6047c47c581f8fb1f16f896ca43c1533"
	Dec 25 19:02:43 old-k8s-version-163446 kubelet[733]: E1225 19:02:43.767324     733 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-7fb8k_kubernetes-dashboard(38c48988-6be8-47e5-a66b-4c0f3bc3dbea)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-7fb8k" podUID="38c48988-6be8-47e5-a66b-4c0f3bc3dbea"
	Dec 25 19:02:54 old-k8s-version-163446 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Dec 25 19:02:54 old-k8s-version-163446 systemd[1]: kubelet.service: Deactivated successfully.
	Dec 25 19:02:54 old-k8s-version-163446 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 25 19:02:54 old-k8s-version-163446 systemd[1]: kubelet.service: Consumed 1.546s CPU time.
	
	
	==> kubernetes-dashboard [e37efd9b2c0f4e3339db38b105725fe701ef12b037a5a8d35c075b3f754150c7] <==
	2025/12/25 19:02:16 Using namespace: kubernetes-dashboard
	2025/12/25 19:02:16 Using in-cluster config to connect to apiserver
	2025/12/25 19:02:16 Using secret token for csrf signing
	2025/12/25 19:02:16 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/12/25 19:02:16 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/12/25 19:02:16 Successful initial request to the apiserver, version: v1.28.0
	2025/12/25 19:02:16 Generating JWE encryption key
	2025/12/25 19:02:16 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/12/25 19:02:16 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/12/25 19:02:16 Initializing JWE encryption key from synchronized object
	2025/12/25 19:02:16 Creating in-cluster Sidecar client
	2025/12/25 19:02:16 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/12/25 19:02:16 Serving insecurely on HTTP port: 9090
	2025/12/25 19:02:46 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/12/25 19:02:16 Starting overwatch
	
	
	==> storage-provisioner [4ce1005c7b5926eec1ae94602837760de0b75dfa3656524847d215328c75ac0b] <==
	I1225 19:02:32.304668       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1225 19:02:32.314203       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1225 19:02:32.314245       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1225 19:02:49.715173       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1225 19:02:49.715284       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"f853f802-d45c-4cc9-a8ea-2b9b3cbed157", APIVersion:"v1", ResourceVersion:"626", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' old-k8s-version-163446_61d5a32b-67aa-4448-8cf1-69ec15ea9eac became leader
	I1225 19:02:49.715361       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_old-k8s-version-163446_61d5a32b-67aa-4448-8cf1-69ec15ea9eac!
	I1225 19:02:49.815643       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_old-k8s-version-163446_61d5a32b-67aa-4448-8cf1-69ec15ea9eac!
	
	
	==> storage-provisioner [d25ed4ed70040fac28d88caa14abd75d2a95994c5887f5143d7fa3e7f5b52c82] <==
	I1225 19:02:01.520724       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1225 19:02:31.525242       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

                                                
                                                
-- /stdout --
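
The kubelet journal above shows dashboard-metrics-scraper-5f989dc9cf-7fb8k stuck in CrashLoopBackOff, with the back-off growing from 10s to 20s. To see why the container keeps exiting, one could pull its logs straight over CRI on the node. A minimal sketch in Go, assuming crictl is available there (for example via minikube ssh) and using standard crictl flags rather than anything taken from this run:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// List every container (running or exited) whose name matches the
	// crash-looping scraper; -q prints only the container IDs.
	out, err := exec.Command("sudo", "crictl", "ps", "-a",
		"--name", "dashboard-metrics-scraper", "-q").Output()
	if err != nil {
		fmt.Println("crictl ps failed:", err)
		return
	}
	for _, id := range strings.Fields(string(out)) {
		// For a CrashLoopBackOff container these logs show why the last
		// attempt exited.
		logs, _ := exec.Command("sudo", "crictl", "logs", "--tail", "50", id).CombinedOutput()
		fmt.Printf("== %s ==\n%s\n", id, logs)
	}
}

The same information is usually reachable from outside the node with kubectl logs --previous against the pod named in the journal entries.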
helpers_test.go:263: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-163446 -n old-k8s-version-163446
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-163446 -n old-k8s-version-163446: exit status 2 (346.825284ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:270: (dbg) Run:  kubectl --context old-k8s-version-163446 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:294: <<< TestStartStop/group/old-k8s-version/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:295: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/old-k8s-version/serial/Pause (6.23s)
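
Earlier in the same post-mortem, the first storage-provisioner instance dies with: error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout, i.e. the in-cluster apiserver service IP was unreachable for its first 30 seconds. A hedged sketch of that probe, runnable only from somewhere that routes the 10.96.0.0/12 service CIDR (a pod or the node itself); the URL and timeout come from the log, while the TLS handling is a simplification:

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func main() {
	// Same endpoint and timeout as the failing request in the log.
	client := &http.Client{
		Timeout: 32 * time.Second,
		Transport: &http.Transport{
			// This sketch has no service-account CA bundle, so skip verification.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	resp, err := client.Get("https://10.96.0.1:443/version")
	if err != nil {
		// This is the condition that killed the first provisioner instance.
		fmt.Println("apiserver unreachable:", err)
		return
	}
	defer resp.Body.Close()
	fmt.Println("apiserver reachable:", resp.Status)
}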

                                                
                                    
TestStartStop/group/no-preload/serial/Pause (6.39s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p no-preload-148352 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 pause -p no-preload-148352 --alsologtostderr -v=1: exit status 80 (2.290170414s)

                                                
                                                
-- stdout --
	* Pausing node no-preload-148352 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1225 19:03:23.446470  293603 out.go:360] Setting OutFile to fd 1 ...
	I1225 19:03:23.446592  293603 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1225 19:03:23.446600  293603 out.go:374] Setting ErrFile to fd 2...
	I1225 19:03:23.446607  293603 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1225 19:03:23.446831  293603 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22301-5579/.minikube/bin
	I1225 19:03:23.447153  293603 out.go:368] Setting JSON to false
	I1225 19:03:23.447175  293603 mustload.go:66] Loading cluster: no-preload-148352
	I1225 19:03:23.447584  293603 config.go:182] Loaded profile config "no-preload-148352": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-rc.1
	I1225 19:03:23.448152  293603 cli_runner.go:164] Run: docker container inspect no-preload-148352 --format={{.State.Status}}
	I1225 19:03:23.468299  293603 host.go:66] Checking if "no-preload-148352" exists ...
	I1225 19:03:23.468631  293603 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1225 19:03:23.526649  293603 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:4 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:77 OomKillDisable:false NGoroutines:85 SystemTime:2025-12-25 19:03:23.516483586 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:dea7da592f5d1d2b7755e3a161be07f43fad8f75 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1225 19:03:23.527287  293603 pause.go:60] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-
cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/22316/minikube-v1.37.0-1766570787-22316-amd64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1766570787-22316/minikube-v1.37.0-1766570787-22316-amd64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1766570787-22316-amd64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qe
mu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) preload-source:auto profile:no-preload-148352 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) rosetta:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool
=true) wantupdatenotification:%!s(bool=true) wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1225 19:03:23.529686  293603 out.go:179] * Pausing node no-preload-148352 ... 
	I1225 19:03:23.530815  293603 host.go:66] Checking if "no-preload-148352" exists ...
	I1225 19:03:23.531106  293603 ssh_runner.go:195] Run: systemctl --version
	I1225 19:03:23.531156  293603 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-148352
	I1225 19:03:23.549267  293603 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33078 SSHKeyPath:/home/jenkins/minikube-integration/22301-5579/.minikube/machines/no-preload-148352/id_rsa Username:docker}
	I1225 19:03:23.638218  293603 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1225 19:03:23.668426  293603 pause.go:52] kubelet running: true
	I1225 19:03:23.668495  293603 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1225 19:03:23.838771  293603 cri.go:61] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1225 19:03:23.838866  293603 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1225 19:03:23.904592  293603 cri.go:96] found id: "a92c9aa96d75456f5f2159899f86a6e08449c6b8d6c47573dff69a819b4c3e43"
	I1225 19:03:23.904611  293603 cri.go:96] found id: "cd48b0389f0865406b664205dcf7168f2c40b064af72c3b306f1eaf26e9b9128"
	I1225 19:03:23.904615  293603 cri.go:96] found id: "e3e24c594c2e90a5b96c7c7292be2263392feb5e70d00b1ec00eb84d2a0fbf17"
	I1225 19:03:23.904619  293603 cri.go:96] found id: "cc74c6a68e0e6d46d88281d2d099411a95d6a602b396328af5ea78c57473e7dc"
	I1225 19:03:23.904621  293603 cri.go:96] found id: "55f12125d0d2e0b7f466cdebd8a8770b9c7062b5f540d2dcaf8cca748d880059"
	I1225 19:03:23.904625  293603 cri.go:96] found id: "2f3a4cbe6949d2645c6993b4cc7109abf638d7d4a738d0209ae98d0d57e87c1b"
	I1225 19:03:23.904627  293603 cri.go:96] found id: "bb2011f8a39109b797fb7b1bf01cff317738a18c03f9c14941817a74f2e323b6"
	I1225 19:03:23.904630  293603 cri.go:96] found id: "47366819032b30036912ff5f63dfa944e254928f33476aba04aaf69af88aaf71"
	I1225 19:03:23.904633  293603 cri.go:96] found id: "aa7daa7b6db664c65cb970f6372118ff3edf3e9ed558da28a08f0e134f753051"
	I1225 19:03:23.904642  293603 cri.go:96] found id: "4d94c5064f5944f34332f4dd87f37ed8394eeca7c7aa67e3c9c70c705f594c8b"
	I1225 19:03:23.904646  293603 cri.go:96] found id: "901f76356987e3e596f87ef92b962ce67c143eef3f37a7b4ac37dbde884cecae"
	I1225 19:03:23.904650  293603 cri.go:96] found id: ""
	I1225 19:03:23.904694  293603 ssh_runner.go:195] Run: sudo runc list -f json
	I1225 19:03:23.916063  293603 retry.go:84] will retry after 300ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-25T19:03:23Z" level=error msg="open /run/runc: no such file or directory"
	I1225 19:03:24.237565  293603 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1225 19:03:24.250219  293603 pause.go:52] kubelet running: false
	I1225 19:03:24.250279  293603 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1225 19:03:24.403627  293603 cri.go:61] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1225 19:03:24.403708  293603 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1225 19:03:24.477290  293603 cri.go:96] found id: "a92c9aa96d75456f5f2159899f86a6e08449c6b8d6c47573dff69a819b4c3e43"
	I1225 19:03:24.477317  293603 cri.go:96] found id: "cd48b0389f0865406b664205dcf7168f2c40b064af72c3b306f1eaf26e9b9128"
	I1225 19:03:24.477322  293603 cri.go:96] found id: "e3e24c594c2e90a5b96c7c7292be2263392feb5e70d00b1ec00eb84d2a0fbf17"
	I1225 19:03:24.477327  293603 cri.go:96] found id: "cc74c6a68e0e6d46d88281d2d099411a95d6a602b396328af5ea78c57473e7dc"
	I1225 19:03:24.477331  293603 cri.go:96] found id: "55f12125d0d2e0b7f466cdebd8a8770b9c7062b5f540d2dcaf8cca748d880059"
	I1225 19:03:24.477335  293603 cri.go:96] found id: "2f3a4cbe6949d2645c6993b4cc7109abf638d7d4a738d0209ae98d0d57e87c1b"
	I1225 19:03:24.477340  293603 cri.go:96] found id: "bb2011f8a39109b797fb7b1bf01cff317738a18c03f9c14941817a74f2e323b6"
	I1225 19:03:24.477344  293603 cri.go:96] found id: "47366819032b30036912ff5f63dfa944e254928f33476aba04aaf69af88aaf71"
	I1225 19:03:24.477348  293603 cri.go:96] found id: "aa7daa7b6db664c65cb970f6372118ff3edf3e9ed558da28a08f0e134f753051"
	I1225 19:03:24.477366  293603 cri.go:96] found id: "4d94c5064f5944f34332f4dd87f37ed8394eeca7c7aa67e3c9c70c705f594c8b"
	I1225 19:03:24.477375  293603 cri.go:96] found id: "901f76356987e3e596f87ef92b962ce67c143eef3f37a7b4ac37dbde884cecae"
	I1225 19:03:24.477380  293603 cri.go:96] found id: ""
	I1225 19:03:24.477426  293603 ssh_runner.go:195] Run: sudo runc list -f json
	I1225 19:03:24.806752  293603 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1225 19:03:24.820563  293603 pause.go:52] kubelet running: false
	I1225 19:03:24.820623  293603 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1225 19:03:24.972739  293603 cri.go:61] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1225 19:03:24.972835  293603 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1225 19:03:25.045622  293603 cri.go:96] found id: "a92c9aa96d75456f5f2159899f86a6e08449c6b8d6c47573dff69a819b4c3e43"
	I1225 19:03:25.045648  293603 cri.go:96] found id: "cd48b0389f0865406b664205dcf7168f2c40b064af72c3b306f1eaf26e9b9128"
	I1225 19:03:25.045655  293603 cri.go:96] found id: "e3e24c594c2e90a5b96c7c7292be2263392feb5e70d00b1ec00eb84d2a0fbf17"
	I1225 19:03:25.045660  293603 cri.go:96] found id: "cc74c6a68e0e6d46d88281d2d099411a95d6a602b396328af5ea78c57473e7dc"
	I1225 19:03:25.045664  293603 cri.go:96] found id: "55f12125d0d2e0b7f466cdebd8a8770b9c7062b5f540d2dcaf8cca748d880059"
	I1225 19:03:25.045670  293603 cri.go:96] found id: "2f3a4cbe6949d2645c6993b4cc7109abf638d7d4a738d0209ae98d0d57e87c1b"
	I1225 19:03:25.045673  293603 cri.go:96] found id: "bb2011f8a39109b797fb7b1bf01cff317738a18c03f9c14941817a74f2e323b6"
	I1225 19:03:25.045677  293603 cri.go:96] found id: "47366819032b30036912ff5f63dfa944e254928f33476aba04aaf69af88aaf71"
	I1225 19:03:25.045681  293603 cri.go:96] found id: "aa7daa7b6db664c65cb970f6372118ff3edf3e9ed558da28a08f0e134f753051"
	I1225 19:03:25.045689  293603 cri.go:96] found id: "4d94c5064f5944f34332f4dd87f37ed8394eeca7c7aa67e3c9c70c705f594c8b"
	I1225 19:03:25.045694  293603 cri.go:96] found id: "901f76356987e3e596f87ef92b962ce67c143eef3f37a7b4ac37dbde884cecae"
	I1225 19:03:25.045699  293603 cri.go:96] found id: ""
	I1225 19:03:25.045752  293603 ssh_runner.go:195] Run: sudo runc list -f json
	I1225 19:03:25.399792  293603 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1225 19:03:25.412392  293603 pause.go:52] kubelet running: false
	I1225 19:03:25.412472  293603 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1225 19:03:25.576534  293603 cri.go:61] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1225 19:03:25.576636  293603 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1225 19:03:25.647181  293603 cri.go:96] found id: "a92c9aa96d75456f5f2159899f86a6e08449c6b8d6c47573dff69a819b4c3e43"
	I1225 19:03:25.647206  293603 cri.go:96] found id: "cd48b0389f0865406b664205dcf7168f2c40b064af72c3b306f1eaf26e9b9128"
	I1225 19:03:25.647216  293603 cri.go:96] found id: "e3e24c594c2e90a5b96c7c7292be2263392feb5e70d00b1ec00eb84d2a0fbf17"
	I1225 19:03:25.647221  293603 cri.go:96] found id: "cc74c6a68e0e6d46d88281d2d099411a95d6a602b396328af5ea78c57473e7dc"
	I1225 19:03:25.647225  293603 cri.go:96] found id: "55f12125d0d2e0b7f466cdebd8a8770b9c7062b5f540d2dcaf8cca748d880059"
	I1225 19:03:25.647230  293603 cri.go:96] found id: "2f3a4cbe6949d2645c6993b4cc7109abf638d7d4a738d0209ae98d0d57e87c1b"
	I1225 19:03:25.647234  293603 cri.go:96] found id: "bb2011f8a39109b797fb7b1bf01cff317738a18c03f9c14941817a74f2e323b6"
	I1225 19:03:25.647237  293603 cri.go:96] found id: "47366819032b30036912ff5f63dfa944e254928f33476aba04aaf69af88aaf71"
	I1225 19:03:25.647241  293603 cri.go:96] found id: "aa7daa7b6db664c65cb970f6372118ff3edf3e9ed558da28a08f0e134f753051"
	I1225 19:03:25.647249  293603 cri.go:96] found id: "4d94c5064f5944f34332f4dd87f37ed8394eeca7c7aa67e3c9c70c705f594c8b"
	I1225 19:03:25.647253  293603 cri.go:96] found id: "901f76356987e3e596f87ef92b962ce67c143eef3f37a7b4ac37dbde884cecae"
	I1225 19:03:25.647256  293603 cri.go:96] found id: ""
	I1225 19:03:25.647299  293603 ssh_runner.go:195] Run: sudo runc list -f json
	I1225 19:03:25.663090  293603 out.go:203] 
	W1225 19:03:25.664357  293603 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-25T19:03:25Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-25T19:03:25Z" level=error msg="open /run/runc: no such file or directory"
	
	W1225 19:03:25.664384  293603 out.go:285] * 
	* 
	W1225 19:03:25.666210  293603 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1225 19:03:25.667285  293603 out.go:203] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:309: out/minikube-linux-amd64 pause -p no-preload-148352 --alsologtostderr -v=1 failed: exit status 80
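
The trace above shows what pause does before it gives up: check and disable the kubelet, list kube-system/kubernetes-dashboard/istio-operator containers with crictl, then ask runc for its view of running containers with sudo runc list -f json. That last step fails every time because /run/runc, runc's default state directory, is missing on the node, and after the retries (the first spaced 300ms apart in retry.go) the command exits with GUEST_PAUSE. A rough sketch of just that failing probe, as a simplification of minikube's pause path rather than its actual code:

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// listRuncContainers runs the exact command from the trace:
// sudo runc list -f json.
func listRuncContainers() ([]byte, error) {
	return exec.Command("sudo", "runc", "list", "-f", "json").Output()
}

func main() {
	var lastErr error
	for attempt := 0; attempt < 3; attempt++ {
		out, err := listRuncContainers()
		if err == nil {
			fmt.Printf("runc containers: %s\n", out)
			return
		}
		lastErr = err
		// The trace retries after 300ms; keep the same spacing here.
		time.Sleep(300 * time.Millisecond)
	}
	// With /run/runc missing every attempt lands here, which is what the
	// test surfaces as "Exiting due to GUEST_PAUSE".
	fmt.Println("giving up:", lastErr)
}

runc also accepts a global --root flag (default /run/runc); whether pointing it at a different state directory would help here depends on where cri-o keeps runc state on this image, which the log does not show.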
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect no-preload-148352
helpers_test.go:244: (dbg) docker inspect no-preload-148352:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "41819bf1bd4bc2d54346cbc83d4feefe6b78f5e9c433c26cf65f99a4307626cc",
	        "Created": "2025-12-25T19:01:06.66476254Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 281486,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-25T19:02:22.656378334Z",
	            "FinishedAt": "2025-12-25T19:02:21.402997407Z"
	        },
	        "Image": "sha256:c444b5e11df9d6bc496256b0e11f4e11deb33a885211ded2c3a4667df55bcb6b",
	        "ResolvConfPath": "/var/lib/docker/containers/41819bf1bd4bc2d54346cbc83d4feefe6b78f5e9c433c26cf65f99a4307626cc/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/41819bf1bd4bc2d54346cbc83d4feefe6b78f5e9c433c26cf65f99a4307626cc/hostname",
	        "HostsPath": "/var/lib/docker/containers/41819bf1bd4bc2d54346cbc83d4feefe6b78f5e9c433c26cf65f99a4307626cc/hosts",
	        "LogPath": "/var/lib/docker/containers/41819bf1bd4bc2d54346cbc83d4feefe6b78f5e9c433c26cf65f99a4307626cc/41819bf1bd4bc2d54346cbc83d4feefe6b78f5e9c433c26cf65f99a4307626cc-json.log",
	        "Name": "/no-preload-148352",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "no-preload-148352:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "no-preload-148352",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "41819bf1bd4bc2d54346cbc83d4feefe6b78f5e9c433c26cf65f99a4307626cc",
	                "LowerDir": "/var/lib/docker/overlay2/ce53440f3336a56e5d3b7cdce9b0468a1a553e258f9f62a74535927ca0c65775-init/diff:/var/lib/docker/overlay2/8152586e7e91edad0090b5c322534edd1346ae6dc28cbca1827aa4c23f366758/diff",
	                "MergedDir": "/var/lib/docker/overlay2/ce53440f3336a56e5d3b7cdce9b0468a1a553e258f9f62a74535927ca0c65775/merged",
	                "UpperDir": "/var/lib/docker/overlay2/ce53440f3336a56e5d3b7cdce9b0468a1a553e258f9f62a74535927ca0c65775/diff",
	                "WorkDir": "/var/lib/docker/overlay2/ce53440f3336a56e5d3b7cdce9b0468a1a553e258f9f62a74535927ca0c65775/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "no-preload-148352",
	                "Source": "/var/lib/docker/volumes/no-preload-148352/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "no-preload-148352",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "no-preload-148352",
	                "name.minikube.sigs.k8s.io": "no-preload-148352",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "5794287c56d63287f24d27d6403a0481c248bdbbd997eb01b1d0757b39dc7467",
	            "SandboxKey": "/var/run/docker/netns/5794287c56d6",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33078"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33079"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33082"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33080"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33081"
	                    }
	                ]
	            },
	            "Networks": {
	                "no-preload-148352": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "7fdcf6cdd30d0ba02321a77fbb55e094d77a371075d285e3dbc5b2c78f7f50f7",
	                    "EndpointID": "ef793fdf1bb46a34f49e79712ce3ef6da23e74ec06ec9d0199b8c4dbd1d47493",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "MacAddress": "be:3d:b2:08:fc:e2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "no-preload-148352",
	                        "41819bf1bd4b"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
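
The inspect JSON above is the same data the earlier cli_runner template walks to find the SSH endpoint: NetworkSettings.Ports["22/tcp"][0] maps to 127.0.0.1:33078, which sshutil then dials. A small sketch decoding only that slice of the output with a throwaway struct (the container name is the one from this test; the struct is illustrative, not minikube code):

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// inspectEntry keeps only the fields this sketch needs from docker inspect.
type inspectEntry struct {
	NetworkSettings struct {
		Ports map[string][]struct {
			HostIp   string
			HostPort string
		}
	}
}

func main() {
	out, err := exec.Command("docker", "inspect", "no-preload-148352").Output()
	if err != nil {
		fmt.Println("docker inspect failed:", err)
		return
	}
	var entries []inspectEntry
	if err := json.Unmarshal(out, &entries); err != nil || len(entries) == 0 {
		fmt.Println("could not decode inspect output:", err)
		return
	}
	// For the container above this prints 127.0.0.1:33078.
	for _, b := range entries[0].NetworkSettings.Ports["22/tcp"] {
		fmt.Printf("ssh endpoint: %s:%s\n", b.HostIp, b.HostPort)
	}
}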
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-148352 -n no-preload-148352
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-148352 -n no-preload-148352: exit status 2 (446.207765ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
helpers_test.go:253: <<< TestStartStop/group/no-preload/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-148352 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-amd64 -p no-preload-148352 logs -n 25: (1.285761148s)
helpers_test.go:261: TestStartStop/group/no-preload/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬──────────────
───────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼──────────────
───────┤
	│ start   │ -p cert-expiration-002470 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio                                                                                                                                     │ cert-expiration-002470       │ jenkins │ v1.37.0 │ 25 Dec 25 19:00 UTC │ 25 Dec 25 19:01 UTC │
	│ delete  │ -p cert-expiration-002470                                                                                                                                                                                                                     │ cert-expiration-002470       │ jenkins │ v1.37.0 │ 25 Dec 25 19:01 UTC │ 25 Dec 25 19:01 UTC │
	│ start   │ -p no-preload-148352 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-rc.1                                                                                  │ no-preload-148352            │ jenkins │ v1.37.0 │ 25 Dec 25 19:01 UTC │ 25 Dec 25 19:01 UTC │
	│ delete  │ -p running-upgrade-861192                                                                                                                                                                                                                     │ running-upgrade-861192       │ jenkins │ v1.37.0 │ 25 Dec 25 19:01 UTC │ 25 Dec 25 19:01 UTC │
	│ start   │ -p embed-certs-684693 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.3                                                                                        │ embed-certs-684693           │ jenkins │ v1.37.0 │ 25 Dec 25 19:01 UTC │ 25 Dec 25 19:02 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-163446 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-163446       │ jenkins │ v1.37.0 │ 25 Dec 25 19:01 UTC │                     │
	│ stop    │ -p old-k8s-version-163446 --alsologtostderr -v=3                                                                                                                                                                                              │ old-k8s-version-163446       │ jenkins │ v1.37.0 │ 25 Dec 25 19:01 UTC │ 25 Dec 25 19:01 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-163446 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-163446       │ jenkins │ v1.37.0 │ 25 Dec 25 19:01 UTC │ 25 Dec 25 19:01 UTC │
	│ start   │ -p old-k8s-version-163446 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-163446       │ jenkins │ v1.37.0 │ 25 Dec 25 19:01 UTC │ 25 Dec 25 19:02 UTC │
	│ addons  │ enable metrics-server -p no-preload-148352 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-148352            │ jenkins │ v1.37.0 │ 25 Dec 25 19:02 UTC │                     │
	│ stop    │ -p no-preload-148352 --alsologtostderr -v=3                                                                                                                                                                                                   │ no-preload-148352            │ jenkins │ v1.37.0 │ 25 Dec 25 19:02 UTC │ 25 Dec 25 19:02 UTC │
	│ addons  │ enable metrics-server -p embed-certs-684693 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-684693           │ jenkins │ v1.37.0 │ 25 Dec 25 19:02 UTC │                     │
	│ stop    │ -p embed-certs-684693 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-684693           │ jenkins │ v1.37.0 │ 25 Dec 25 19:02 UTC │ 25 Dec 25 19:02 UTC │
	│ addons  │ enable dashboard -p no-preload-148352 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-148352            │ jenkins │ v1.37.0 │ 25 Dec 25 19:02 UTC │ 25 Dec 25 19:02 UTC │
	│ start   │ -p no-preload-148352 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-rc.1                                                                                  │ no-preload-148352            │ jenkins │ v1.37.0 │ 25 Dec 25 19:02 UTC │ 25 Dec 25 19:03 UTC │
	│ addons  │ enable dashboard -p embed-certs-684693 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-684693           │ jenkins │ v1.37.0 │ 25 Dec 25 19:02 UTC │ 25 Dec 25 19:02 UTC │
	│ start   │ -p embed-certs-684693 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.3                                                                                        │ embed-certs-684693           │ jenkins │ v1.37.0 │ 25 Dec 25 19:02 UTC │ 25 Dec 25 19:03 UTC │
	│ image   │ old-k8s-version-163446 image list --format=json                                                                                                                                                                                               │ old-k8s-version-163446       │ jenkins │ v1.37.0 │ 25 Dec 25 19:02 UTC │ 25 Dec 25 19:02 UTC │
	│ pause   │ -p old-k8s-version-163446 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-163446       │ jenkins │ v1.37.0 │ 25 Dec 25 19:02 UTC │                     │
	│ delete  │ -p old-k8s-version-163446                                                                                                                                                                                                                     │ old-k8s-version-163446       │ jenkins │ v1.37.0 │ 25 Dec 25 19:02 UTC │ 25 Dec 25 19:03 UTC │
	│ delete  │ -p old-k8s-version-163446                                                                                                                                                                                                                     │ old-k8s-version-163446       │ jenkins │ v1.37.0 │ 25 Dec 25 19:03 UTC │ 25 Dec 25 19:03 UTC │
	│ delete  │ -p disable-driver-mounts-102827                                                                                                                                                                                                               │ disable-driver-mounts-102827 │ jenkins │ v1.37.0 │ 25 Dec 25 19:03 UTC │ 25 Dec 25 19:03 UTC │
	│ start   │ -p default-k8s-diff-port-960022 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.3                                                                      │ default-k8s-diff-port-960022 │ jenkins │ v1.37.0 │ 25 Dec 25 19:03 UTC │                     │
	│ image   │ no-preload-148352 image list --format=json                                                                                                                                                                                                    │ no-preload-148352            │ jenkins │ v1.37.0 │ 25 Dec 25 19:03 UTC │ 25 Dec 25 19:03 UTC │
	│ pause   │ -p no-preload-148352 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-148352            │ jenkins │ v1.37.0 │ 25 Dec 25 19:03 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴──────────────
───────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/25 19:03:03
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1225 19:03:03.260659  290541 out.go:360] Setting OutFile to fd 1 ...
	I1225 19:03:03.260750  290541 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1225 19:03:03.260758  290541 out.go:374] Setting ErrFile to fd 2...
	I1225 19:03:03.260763  290541 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1225 19:03:03.260972  290541 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22301-5579/.minikube/bin
	I1225 19:03:03.261480  290541 out.go:368] Setting JSON to false
	I1225 19:03:03.262644  290541 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":2731,"bootTime":1766686652,"procs":343,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1225 19:03:03.262708  290541 start.go:143] virtualization: kvm guest
	I1225 19:03:03.264770  290541 out.go:179] * [default-k8s-diff-port-960022] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1225 19:03:03.266042  290541 notify.go:221] Checking for updates...
	I1225 19:03:03.266057  290541 out.go:179]   - MINIKUBE_LOCATION=22301
	I1225 19:03:03.267429  290541 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1225 19:03:03.269101  290541 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22301-5579/kubeconfig
	I1225 19:03:03.270287  290541 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22301-5579/.minikube
	I1225 19:03:03.272709  290541 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1225 19:03:03.273925  290541 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1225 19:03:03.275633  290541 config.go:182] Loaded profile config "embed-certs-684693": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1225 19:03:03.275725  290541 config.go:182] Loaded profile config "kubernetes-upgrade-498224": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-rc.1
	I1225 19:03:03.275830  290541 config.go:182] Loaded profile config "no-preload-148352": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-rc.1
	I1225 19:03:03.275955  290541 driver.go:422] Setting default libvirt URI to qemu:///system
	I1225 19:03:03.301169  290541 docker.go:124] docker version: linux-29.1.3:Docker Engine - Community
	I1225 19:03:03.301259  290541 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1225 19:03:03.362636  290541 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:64 OomKillDisable:false NGoroutines:75 SystemTime:2025-12-25 19:03:03.351593327 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:dea7da592f5d1d2b7755e3a161be07f43fad8f75 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1225 19:03:03.362744  290541 docker.go:319] overlay module found
	I1225 19:03:03.365026  290541 out.go:179] * Using the docker driver based on user configuration
	I1225 19:03:03.366911  290541 start.go:309] selected driver: docker
	I1225 19:03:03.366928  290541 start.go:928] validating driver "docker" against <nil>
	I1225 19:03:03.366943  290541 start.go:939] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1225 19:03:03.367476  290541 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1225 19:03:03.425793  290541 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:64 OomKillDisable:false NGoroutines:75 SystemTime:2025-12-25 19:03:03.416183241 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:dea7da592f5d1d2b7755e3a161be07f43fad8f75 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1225 19:03:03.425998  290541 start_flags.go:333] no existing cluster config was found, will generate one from the flags 
	I1225 19:03:03.426447  290541 start_flags.go:1019] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1225 19:03:03.427956  290541 out.go:179] * Using Docker driver with root privileges
	I1225 19:03:03.429194  290541 cni.go:84] Creating CNI manager for ""
	I1225 19:03:03.429263  290541 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1225 19:03:03.429275  290541 start_flags.go:342] Found "CNI" CNI - setting NetworkPlugin=cni
	I1225 19:03:03.429344  290541 start.go:353] cluster config:
	{Name:default-k8s-diff-port-960022 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:default-k8s-diff-port-960022 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:
cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SS
HAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1225 19:03:03.430731  290541 out.go:179] * Starting "default-k8s-diff-port-960022" primary control-plane node in "default-k8s-diff-port-960022" cluster
	I1225 19:03:03.431854  290541 cache.go:134] Beginning downloading kic base image for docker with crio
	I1225 19:03:03.432973  290541 out.go:179] * Pulling base image v0.0.48-1766570851-22316 ...
	I1225 19:03:03.433946  290541 preload.go:188] Checking if preload exists for k8s version v1.34.3 and runtime crio
	I1225 19:03:03.433975  290541 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22301-5579/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.3-cri-o-overlay-amd64.tar.lz4
	I1225 19:03:03.433988  290541 cache.go:65] Caching tarball of preloaded images
	I1225 19:03:03.434048  290541 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a in local docker daemon
	I1225 19:03:03.434083  290541 preload.go:251] Found /home/jenkins/minikube-integration/22301-5579/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1225 19:03:03.434099  290541 cache.go:68] Finished verifying existence of preloaded tar for v1.34.3 on crio
	I1225 19:03:03.434224  290541 profile.go:143] Saving config to /home/jenkins/minikube-integration/22301-5579/.minikube/profiles/default-k8s-diff-port-960022/config.json ...
	I1225 19:03:03.434249  290541 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22301-5579/.minikube/profiles/default-k8s-diff-port-960022/config.json: {Name:mk23e95983e818b85162d68edd988fdf930d6200 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1225 19:03:03.455337  290541 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a in local docker daemon, skipping pull
	I1225 19:03:03.455367  290541 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a exists in daemon, skipping load
	I1225 19:03:03.455388  290541 cache.go:243] Successfully downloaded all kic artifacts
	I1225 19:03:03.455420  290541 start.go:360] acquireMachinesLock for default-k8s-diff-port-960022: {Name:mk439ca411b17a34361cdf557c6ddd774780f327 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1225 19:03:03.455524  290541 start.go:364] duration metric: took 84.004µs to acquireMachinesLock for "default-k8s-diff-port-960022"
	I1225 19:03:03.455550  290541 start.go:93] Provisioning new machine with config: &{Name:default-k8s-diff-port-960022 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:default-k8s-diff-port-960022 Namespace:default API
ServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:
false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false} &{Name: IP: Port:8444 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1225 19:03:03.455636  290541 start.go:125] createHost starting for "" (driver="docker")
	W1225 19:03:01.663815  283722 pod_ready.go:104] pod "coredns-66bc5c9577-n4nqj" is not "Ready", error: <nil>
	W1225 19:03:04.164023  283722 pod_ready.go:104] pod "coredns-66bc5c9577-n4nqj" is not "Ready", error: <nil>
	W1225 19:03:03.753993  281279 pod_ready.go:104] pod "coredns-7d764666f9-lqvms" is not "Ready", error: <nil>
	W1225 19:03:06.252227  281279 pod_ready.go:104] pod "coredns-7d764666f9-lqvms" is not "Ready", error: <nil>
	I1225 19:03:02.771289  260034 api_server.go:299] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1225 19:03:02.771684  260034 api_server.go:315] stopped: https://192.168.94.2:8443/healthz: Get "https://192.168.94.2:8443/healthz": dial tcp 192.168.94.2:8443: connect: connection refused
	I1225 19:03:02.771730  260034 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1225 19:03:02.771779  260034 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1225 19:03:02.802529  260034 cri.go:96] found id: "6286b997d7e536f57739dca40618206bd2111cd0ea9142bc6f4203ad7d126123"
	I1225 19:03:02.802556  260034 cri.go:96] found id: "e3d5802d1e3d9285d4b0925e9f5391b90305352a65abbab3d6461e977e07719c"
	I1225 19:03:02.802563  260034 cri.go:96] found id: ""
	I1225 19:03:02.802570  260034 logs.go:282] 2 containers: [6286b997d7e536f57739dca40618206bd2111cd0ea9142bc6f4203ad7d126123 e3d5802d1e3d9285d4b0925e9f5391b90305352a65abbab3d6461e977e07719c]
	I1225 19:03:02.802620  260034 ssh_runner.go:195] Run: which crictl
	I1225 19:03:02.806803  260034 ssh_runner.go:195] Run: which crictl
	I1225 19:03:02.810869  260034 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1225 19:03:02.810939  260034 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1225 19:03:02.839325  260034 cri.go:96] found id: "b9e20d208a63ece66f4b7d503d5220ba92810f69d6963dee30a8cf70eeec5508"
	I1225 19:03:02.839349  260034 cri.go:96] found id: ""
	I1225 19:03:02.839362  260034 logs.go:282] 1 containers: [b9e20d208a63ece66f4b7d503d5220ba92810f69d6963dee30a8cf70eeec5508]
	I1225 19:03:02.839411  260034 ssh_runner.go:195] Run: which crictl
	I1225 19:03:02.843361  260034 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1225 19:03:02.843426  260034 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1225 19:03:02.872485  260034 cri.go:96] found id: ""
	I1225 19:03:02.872510  260034 logs.go:282] 0 containers: []
	W1225 19:03:02.872521  260034 logs.go:284] No container was found matching "coredns"
	I1225 19:03:02.872528  260034 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1225 19:03:02.872586  260034 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1225 19:03:02.901050  260034 cri.go:96] found id: "5908cdc560e3221d929c96c779b2e73ece21f96acb63b3adbf448cc7de791f2b"
	I1225 19:03:02.901072  260034 cri.go:96] found id: "47a1daa4bde269c4add70fd957319c2bd011e78566a1187d9a76b78fb4005782"
	I1225 19:03:02.901077  260034 cri.go:96] found id: ""
	I1225 19:03:02.901084  260034 logs.go:282] 2 containers: [5908cdc560e3221d929c96c779b2e73ece21f96acb63b3adbf448cc7de791f2b 47a1daa4bde269c4add70fd957319c2bd011e78566a1187d9a76b78fb4005782]
	I1225 19:03:02.901142  260034 ssh_runner.go:195] Run: which crictl
	I1225 19:03:02.905515  260034 ssh_runner.go:195] Run: which crictl
	I1225 19:03:02.909197  260034 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1225 19:03:02.909254  260034 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1225 19:03:02.937731  260034 cri.go:96] found id: ""
	I1225 19:03:02.937764  260034 logs.go:282] 0 containers: []
	W1225 19:03:02.937775  260034 logs.go:284] No container was found matching "kube-proxy"
	I1225 19:03:02.937783  260034 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1225 19:03:02.937832  260034 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1225 19:03:02.969173  260034 cri.go:96] found id: "4d705cb1d8d35bb521ed14e8e331d8b60738d540fae000680afb7e9d190d17db"
	I1225 19:03:02.969196  260034 cri.go:96] found id: "33343dda71f9e4a85025812aa89a625b9ed4aacd8cf40e537a040b7a352c6338"
	I1225 19:03:02.969202  260034 cri.go:96] found id: ""
	I1225 19:03:02.969211  260034 logs.go:282] 2 containers: [4d705cb1d8d35bb521ed14e8e331d8b60738d540fae000680afb7e9d190d17db 33343dda71f9e4a85025812aa89a625b9ed4aacd8cf40e537a040b7a352c6338]
	I1225 19:03:02.969268  260034 ssh_runner.go:195] Run: which crictl
	I1225 19:03:02.973335  260034 ssh_runner.go:195] Run: which crictl
	I1225 19:03:02.978265  260034 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1225 19:03:02.978337  260034 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1225 19:03:03.012483  260034 cri.go:96] found id: ""
	I1225 19:03:03.012516  260034 logs.go:282] 0 containers: []
	W1225 19:03:03.012529  260034 logs.go:284] No container was found matching "kindnet"
	I1225 19:03:03.012538  260034 cri.go:61] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1225 19:03:03.012604  260034 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=storage-provisioner
	I1225 19:03:03.047542  260034 cri.go:96] found id: ""
	I1225 19:03:03.047569  260034 logs.go:282] 0 containers: []
	W1225 19:03:03.047579  260034 logs.go:284] No container was found matching "storage-provisioner"
	I1225 19:03:03.047589  260034 logs.go:123] Gathering logs for kube-apiserver [6286b997d7e536f57739dca40618206bd2111cd0ea9142bc6f4203ad7d126123] ...
	I1225 19:03:03.047610  260034 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 6286b997d7e536f57739dca40618206bd2111cd0ea9142bc6f4203ad7d126123"
	I1225 19:03:03.080556  260034 logs.go:123] Gathering logs for kube-apiserver [e3d5802d1e3d9285d4b0925e9f5391b90305352a65abbab3d6461e977e07719c] ...
	I1225 19:03:03.080581  260034 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e3d5802d1e3d9285d4b0925e9f5391b90305352a65abbab3d6461e977e07719c"
	I1225 19:03:03.118105  260034 logs.go:123] Gathering logs for kube-scheduler [5908cdc560e3221d929c96c779b2e73ece21f96acb63b3adbf448cc7de791f2b] ...
	I1225 19:03:03.118131  260034 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5908cdc560e3221d929c96c779b2e73ece21f96acb63b3adbf448cc7de791f2b"
	I1225 19:03:03.147128  260034 logs.go:123] Gathering logs for kube-controller-manager [4d705cb1d8d35bb521ed14e8e331d8b60738d540fae000680afb7e9d190d17db] ...
	I1225 19:03:03.147153  260034 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 4d705cb1d8d35bb521ed14e8e331d8b60738d540fae000680afb7e9d190d17db"
	I1225 19:03:03.178254  260034 logs.go:123] Gathering logs for etcd [b9e20d208a63ece66f4b7d503d5220ba92810f69d6963dee30a8cf70eeec5508] ...
	I1225 19:03:03.178281  260034 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b9e20d208a63ece66f4b7d503d5220ba92810f69d6963dee30a8cf70eeec5508"
	I1225 19:03:03.213339  260034 logs.go:123] Gathering logs for kube-scheduler [47a1daa4bde269c4add70fd957319c2bd011e78566a1187d9a76b78fb4005782] ...
	I1225 19:03:03.213363  260034 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 47a1daa4bde269c4add70fd957319c2bd011e78566a1187d9a76b78fb4005782"
	I1225 19:03:03.243438  260034 logs.go:123] Gathering logs for kube-controller-manager [33343dda71f9e4a85025812aa89a625b9ed4aacd8cf40e537a040b7a352c6338] ...
	I1225 19:03:03.243464  260034 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 33343dda71f9e4a85025812aa89a625b9ed4aacd8cf40e537a040b7a352c6338"
	I1225 19:03:03.272471  260034 logs.go:123] Gathering logs for CRI-O ...
	I1225 19:03:03.272500  260034 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1225 19:03:03.328034  260034 logs.go:123] Gathering logs for container status ...
	I1225 19:03:03.328064  260034 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1225 19:03:03.364639  260034 logs.go:123] Gathering logs for kubelet ...
	I1225 19:03:03.364667  260034 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1225 19:03:03.457026  260034 logs.go:123] Gathering logs for dmesg ...
	I1225 19:03:03.457060  260034 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1225 19:03:03.471887  260034 logs.go:123] Gathering logs for describe nodes ...
	I1225 19:03:03.471925  260034 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1225 19:03:03.543772  260034 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
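For context, the health probe and container listing that this retry loop performs can be reproduced by hand on the node. A minimal sketch, assuming the same 192.168.94.2:8443 endpoint and the crictl binary the log already uses:

    # Probe the apiserver health endpoint directly (-k because the CA is minikube's own)
    curl -ksS https://192.168.94.2:8443/healthz; echo
    # List kube-apiserver containers, running or exited, via the CRI socket
    sudo crictl ps -a --name kube-apiserver
    # Tail the newest kube-apiserver container's logs
    sudo crictl logs --tail 50 "$(sudo crictl ps -a --quiet --name kube-apiserver | head -n1)"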
	I1225 19:03:06.045368  260034 api_server.go:299] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1225 19:03:03.457599  290541 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1225 19:03:03.457890  290541 start.go:159] libmachine.API.Create for "default-k8s-diff-port-960022" (driver="docker")
	I1225 19:03:03.457951  290541 client.go:173] LocalClient.Create starting
	I1225 19:03:03.458033  290541 main.go:144] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22301-5579/.minikube/certs/ca.pem
	I1225 19:03:03.458082  290541 main.go:144] libmachine: Decoding PEM data...
	I1225 19:03:03.458110  290541 main.go:144] libmachine: Parsing certificate...
	I1225 19:03:03.458183  290541 main.go:144] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22301-5579/.minikube/certs/cert.pem
	I1225 19:03:03.458222  290541 main.go:144] libmachine: Decoding PEM data...
	I1225 19:03:03.458239  290541 main.go:144] libmachine: Parsing certificate...
	I1225 19:03:03.458697  290541 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-960022 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1225 19:03:03.478346  290541 cli_runner.go:211] docker network inspect default-k8s-diff-port-960022 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1225 19:03:03.478429  290541 network_create.go:284] running [docker network inspect default-k8s-diff-port-960022] to gather additional debugging logs...
	I1225 19:03:03.478453  290541 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-960022
	W1225 19:03:03.498977  290541 cli_runner.go:211] docker network inspect default-k8s-diff-port-960022 returned with exit code 1
	I1225 19:03:03.499029  290541 network_create.go:287] error running [docker network inspect default-k8s-diff-port-960022]: docker network inspect default-k8s-diff-port-960022: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network default-k8s-diff-port-960022 not found
	I1225 19:03:03.499046  290541 network_create.go:289] output of [docker network inspect default-k8s-diff-port-960022]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network default-k8s-diff-port-960022 not found
	
	** /stderr **
	I1225 19:03:03.499185  290541 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1225 19:03:03.519019  290541 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-ced36c84bfdd IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:4a:63:07:5b:3f:80} reservation:<nil>}
	I1225 19:03:03.519988  290541 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-4f7e79553acc IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:92:4f:4f:8b:03:9b} reservation:<nil>}
	I1225 19:03:03.520982  290541 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-f47bec209e15 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:b2:e9:83:11:22:b7} reservation:<nil>}
	I1225 19:03:03.521987  290541 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-b5ae0820826f IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:72:16:14:1f:73:da} reservation:<nil>}
	I1225 19:03:03.522949  290541 network.go:211] skipping subnet 192.168.85.0/24 that is taken: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName:br-7fdcf6cdd30d IfaceIPv4:192.168.85.1 IfaceMTU:1500 IfaceMAC:ea:90:74:93:c0:40} reservation:<nil>}
	I1225 19:03:03.523500  290541 network.go:211] skipping subnet 192.168.94.0/24 that is taken: &{IP:192.168.94.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.94.0/24 Gateway:192.168.94.1 ClientMin:192.168.94.2 ClientMax:192.168.94.254 Broadcast:192.168.94.255 IsPrivate:true Interface:{IfaceName:br-f22c9f3db53f IfaceIPv4:192.168.94.1 IfaceMTU:1500 IfaceMAC:42:11:3a:34:ba:a9} reservation:<nil>}
	I1225 19:03:03.524949  290541 network.go:206] using free private subnet 192.168.103.0/24: &{IP:192.168.103.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.103.0/24 Gateway:192.168.103.1 ClientMin:192.168.103.2 ClientMax:192.168.103.254 Broadcast:192.168.103.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001f32d00}
	I1225 19:03:03.524992  290541 network_create.go:124] attempt to create docker network default-k8s-diff-port-960022 192.168.103.0/24 with gateway 192.168.103.1 and MTU of 1500 ...
	I1225 19:03:03.525055  290541 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.103.0/24 --gateway=192.168.103.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=default-k8s-diff-port-960022 default-k8s-diff-port-960022
	I1225 19:03:03.579500  290541 network_create.go:108] docker network default-k8s-diff-port-960022 192.168.103.0/24 created
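The scan above walks the 192.168.x.0/24 private ranges, skips every subnet an existing bridge already owns, and settles on 192.168.103.0/24. The same picture can be pulled from the host with plain docker CLI calls (a sketch; it only lists what docker itself manages):

    # Print each docker network together with the subnet(s) it reserves
    for net in $(docker network ls --format '{{.Name}}'); do
      docker network inspect "$net" --format '{{.Name}}: {{range .IPAM.Config}}{{.Subnet}} {{end}}'
    done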
	I1225 19:03:03.579533  290541 kic.go:121] calculated static IP "192.168.103.2" for the "default-k8s-diff-port-960022" container
	I1225 19:03:03.579596  290541 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1225 19:03:03.598187  290541 cli_runner.go:164] Run: docker volume create default-k8s-diff-port-960022 --label name.minikube.sigs.k8s.io=default-k8s-diff-port-960022 --label created_by.minikube.sigs.k8s.io=true
	I1225 19:03:03.617904  290541 oci.go:103] Successfully created a docker volume default-k8s-diff-port-960022
	I1225 19:03:03.617974  290541 cli_runner.go:164] Run: docker run --rm --name default-k8s-diff-port-960022-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=default-k8s-diff-port-960022 --entrypoint /usr/bin/test -v default-k8s-diff-port-960022:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a -d /var/lib
	I1225 19:03:04.030742  290541 oci.go:107] Successfully prepared a docker volume default-k8s-diff-port-960022
	I1225 19:03:04.030817  290541 preload.go:188] Checking if preload exists for k8s version v1.34.3 and runtime crio
	I1225 19:03:04.030833  290541 kic.go:194] Starting extracting preloaded images to volume ...
	I1225 19:03:04.030928  290541 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22301-5579/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.3-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v default-k8s-diff-port-960022:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a -I lz4 -xf /preloaded.tar -C /extractDir
	I1225 19:03:07.889130  290541 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22301-5579/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.3-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v default-k8s-diff-port-960022:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a -I lz4 -xf /preloaded.tar -C /extractDir: (3.85814146s)
	I1225 19:03:07.889167  290541 kic.go:203] duration metric: took 3.858330464s to extract preloaded images to volume ...
	W1225 19:03:07.889258  290541 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1225 19:03:07.889302  290541 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1225 19:03:07.889350  290541 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1225 19:03:07.945593  290541 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname default-k8s-diff-port-960022 --name default-k8s-diff-port-960022 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=default-k8s-diff-port-960022 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=default-k8s-diff-port-960022 --network default-k8s-diff-port-960022 --ip 192.168.103.2 --volume default-k8s-diff-port-960022:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8444 --publish=127.0.0.1::8444 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a
	I1225 19:03:08.221159  290541 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-960022 --format={{.State.Running}}
	I1225 19:03:08.238995  290541 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-960022 --format={{.State.Status}}
	I1225 19:03:08.259084  290541 cli_runner.go:164] Run: docker exec default-k8s-diff-port-960022 stat /var/lib/dpkg/alternatives/iptables
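The docker run above publishes sshd (22), the Docker socket port (2376), and the apiserver port (8444), among others, to ephemeral ports on 127.0.0.1; the inspect template used a few lines later recovers the 22/tcp mapping. A shorter equivalent check from the host, as a sketch:

    # Which 127.0.0.1:<port> reaches sshd and the apiserver inside the kic container
    docker port default-k8s-diff-port-960022 22/tcp
    docker port default-k8s-diff-port-960022 8444/tcp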
	W1225 19:03:06.164061  283722 pod_ready.go:104] pod "coredns-66bc5c9577-n4nqj" is not "Ready", error: <nil>
	W1225 19:03:08.164506  283722 pod_ready.go:104] pod "coredns-66bc5c9577-n4nqj" is not "Ready", error: <nil>
	I1225 19:03:08.305099  290541 oci.go:144] the created container "default-k8s-diff-port-960022" has a running status.
	I1225 19:03:08.305135  290541 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/22301-5579/.minikube/machines/default-k8s-diff-port-960022/id_rsa...
	I1225 19:03:08.458115  290541 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/22301-5579/.minikube/machines/default-k8s-diff-port-960022/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1225 19:03:08.487974  290541 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-960022 --format={{.State.Status}}
	I1225 19:03:08.506555  290541 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1225 19:03:08.506576  290541 kic_runner.go:114] Args: [docker exec --privileged default-k8s-diff-port-960022 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1225 19:03:08.556659  290541 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-960022 --format={{.State.Status}}
	I1225 19:03:08.575407  290541 machine.go:94] provisionDockerMachine start ...
	I1225 19:03:08.575484  290541 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-960022
	I1225 19:03:08.598509  290541 main.go:144] libmachine: Using SSH client type: native
	I1225 19:03:08.598911  290541 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e300] 0x850fa0 <nil>  [] 0s} 127.0.0.1 33088 <nil> <nil>}
	I1225 19:03:08.598937  290541 main.go:144] libmachine: About to run SSH command:
	hostname
	I1225 19:03:08.727165  290541 main.go:144] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-960022
	
	I1225 19:03:08.727197  290541 ubuntu.go:182] provisioning hostname "default-k8s-diff-port-960022"
	I1225 19:03:08.727268  290541 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-960022
	I1225 19:03:08.746606  290541 main.go:144] libmachine: Using SSH client type: native
	I1225 19:03:08.746933  290541 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e300] 0x850fa0 <nil>  [] 0s} 127.0.0.1 33088 <nil> <nil>}
	I1225 19:03:08.746960  290541 main.go:144] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-960022 && echo "default-k8s-diff-port-960022" | sudo tee /etc/hostname
	I1225 19:03:08.881101  290541 main.go:144] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-960022
	
	I1225 19:03:08.881206  290541 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-960022
	I1225 19:03:08.903866  290541 main.go:144] libmachine: Using SSH client type: native
	I1225 19:03:08.904122  290541 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e300] 0x850fa0 <nil>  [] 0s} 127.0.0.1 33088 <nil> <nil>}
	I1225 19:03:08.904145  290541 main.go:144] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-960022' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-960022/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-960022' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1225 19:03:09.027579  290541 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	I1225 19:03:09.027625  290541 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22301-5579/.minikube CaCertPath:/home/jenkins/minikube-integration/22301-5579/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22301-5579/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22301-5579/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22301-5579/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22301-5579/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22301-5579/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22301-5579/.minikube}
	I1225 19:03:09.027665  290541 ubuntu.go:190] setting up certificates
	I1225 19:03:09.027678  290541 provision.go:84] configureAuth start
	I1225 19:03:09.027764  290541 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-960022
	I1225 19:03:09.044970  290541 provision.go:143] copyHostCerts
	I1225 19:03:09.045033  290541 exec_runner.go:144] found /home/jenkins/minikube-integration/22301-5579/.minikube/ca.pem, removing ...
	I1225 19:03:09.045044  290541 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22301-5579/.minikube/ca.pem
	I1225 19:03:09.045123  290541 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22301-5579/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22301-5579/.minikube/ca.pem (1078 bytes)
	I1225 19:03:09.045226  290541 exec_runner.go:144] found /home/jenkins/minikube-integration/22301-5579/.minikube/cert.pem, removing ...
	I1225 19:03:09.045235  290541 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22301-5579/.minikube/cert.pem
	I1225 19:03:09.045261  290541 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22301-5579/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22301-5579/.minikube/cert.pem (1123 bytes)
	I1225 19:03:09.045328  290541 exec_runner.go:144] found /home/jenkins/minikube-integration/22301-5579/.minikube/key.pem, removing ...
	I1225 19:03:09.045335  290541 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22301-5579/.minikube/key.pem
	I1225 19:03:09.045358  290541 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22301-5579/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22301-5579/.minikube/key.pem (1679 bytes)
	I1225 19:03:09.045888  290541 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22301-5579/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22301-5579/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22301-5579/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-960022 san=[127.0.0.1 192.168.103.2 default-k8s-diff-port-960022 localhost minikube]
	I1225 19:03:09.092526  290541 provision.go:177] copyRemoteCerts
	I1225 19:03:09.092585  290541 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1225 19:03:09.092617  290541 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-960022
	I1225 19:03:09.109947  290541 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33088 SSHKeyPath:/home/jenkins/minikube-integration/22301-5579/.minikube/machines/default-k8s-diff-port-960022/id_rsa Username:docker}
	I1225 19:03:09.202295  290541 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22301-5579/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I1225 19:03:09.221345  290541 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22301-5579/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1225 19:03:09.238628  290541 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22301-5579/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1225 19:03:09.256546  290541 provision.go:87] duration metric: took 228.857085ms to configureAuth
	I1225 19:03:09.256572  290541 ubuntu.go:206] setting minikube options for container-runtime
	I1225 19:03:09.256741  290541 config.go:182] Loaded profile config "default-k8s-diff-port-960022": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1225 19:03:09.256845  290541 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-960022
	I1225 19:03:09.275421  290541 main.go:144] libmachine: Using SSH client type: native
	I1225 19:03:09.275621  290541 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e300] 0x850fa0 <nil>  [] 0s} 127.0.0.1 33088 <nil> <nil>}
	I1225 19:03:09.275637  290541 main.go:144] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1225 19:03:09.532278  290541 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1225 19:03:09.532307  290541 machine.go:97] duration metric: took 956.878726ms to provisionDockerMachine
	I1225 19:03:09.532318  290541 client.go:176] duration metric: took 6.074358023s to LocalClient.Create
	I1225 19:03:09.532337  290541 start.go:167] duration metric: took 6.074448934s to libmachine.API.Create "default-k8s-diff-port-960022"
	I1225 19:03:09.532343  290541 start.go:293] postStartSetup for "default-k8s-diff-port-960022" (driver="docker")
	I1225 19:03:09.532354  290541 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1225 19:03:09.532419  290541 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1225 19:03:09.532467  290541 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-960022
	I1225 19:03:09.550263  290541 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33088 SSHKeyPath:/home/jenkins/minikube-integration/22301-5579/.minikube/machines/default-k8s-diff-port-960022/id_rsa Username:docker}
	I1225 19:03:09.642784  290541 ssh_runner.go:195] Run: cat /etc/os-release
	I1225 19:03:09.646344  290541 main.go:144] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1225 19:03:09.646366  290541 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1225 19:03:09.646376  290541 filesync.go:126] Scanning /home/jenkins/minikube-integration/22301-5579/.minikube/addons for local assets ...
	I1225 19:03:09.646430  290541 filesync.go:126] Scanning /home/jenkins/minikube-integration/22301-5579/.minikube/files for local assets ...
	I1225 19:03:09.646539  290541 filesync.go:149] local asset: /home/jenkins/minikube-integration/22301-5579/.minikube/files/etc/ssl/certs/91122.pem -> 91122.pem in /etc/ssl/certs
	I1225 19:03:09.646661  290541 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1225 19:03:09.654261  290541 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22301-5579/.minikube/files/etc/ssl/certs/91122.pem --> /etc/ssl/certs/91122.pem (1708 bytes)
	I1225 19:03:09.674002  290541 start.go:296] duration metric: took 141.645847ms for postStartSetup
	I1225 19:03:09.674392  290541 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-960022
	I1225 19:03:09.691409  290541 profile.go:143] Saving config to /home/jenkins/minikube-integration/22301-5579/.minikube/profiles/default-k8s-diff-port-960022/config.json ...
	I1225 19:03:09.691669  290541 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1225 19:03:09.691735  290541 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-960022
	I1225 19:03:09.709864  290541 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33088 SSHKeyPath:/home/jenkins/minikube-integration/22301-5579/.minikube/machines/default-k8s-diff-port-960022/id_rsa Username:docker}
	I1225 19:03:09.797890  290541 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1225 19:03:09.802308  290541 start.go:128] duration metric: took 6.34665946s to createHost
	I1225 19:03:09.802331  290541 start.go:83] releasing machines lock for "default-k8s-diff-port-960022", held for 6.346794686s
	I1225 19:03:09.802417  290541 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-960022
	I1225 19:03:09.820182  290541 ssh_runner.go:195] Run: cat /version.json
	I1225 19:03:09.820242  290541 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-960022
	I1225 19:03:09.820250  290541 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1225 19:03:09.820310  290541 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-960022
	I1225 19:03:09.838443  290541 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33088 SSHKeyPath:/home/jenkins/minikube-integration/22301-5579/.minikube/machines/default-k8s-diff-port-960022/id_rsa Username:docker}
	I1225 19:03:09.838779  290541 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33088 SSHKeyPath:/home/jenkins/minikube-integration/22301-5579/.minikube/machines/default-k8s-diff-port-960022/id_rsa Username:docker}
	I1225 19:03:09.984002  290541 ssh_runner.go:195] Run: systemctl --version
	I1225 19:03:09.990474  290541 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1225 19:03:10.025389  290541 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1225 19:03:10.030215  290541 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1225 19:03:10.030278  290541 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1225 19:03:10.055379  290541 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1225 19:03:10.055398  290541 start.go:496] detecting cgroup driver to use...
	I1225 19:03:10.055428  290541 detect.go:190] detected "systemd" cgroup driver on host os
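The detected "systemd" driver matches the CgroupDriver:systemd field in the docker info dump earlier, and it is what CRI-O and the kubelet are configured with below, so all three layers agree. Two quick host-side checks, as a sketch:

    # Cgroup driver the docker daemon reports
    docker info --format '{{.CgroupDriver}}'
    # Filesystem type of /sys/fs/cgroup: cgroup2fs indicates a unified cgroup v2 hierarchy
    stat -fc %T /sys/fs/cgroup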
	I1225 19:03:10.055477  290541 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1225 19:03:10.071670  290541 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1225 19:03:10.084033  290541 docker.go:218] disabling cri-docker service (if available) ...
	I1225 19:03:10.084084  290541 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1225 19:03:10.100284  290541 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1225 19:03:10.118126  290541 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1225 19:03:10.204379  290541 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1225 19:03:10.298103  290541 docker.go:234] disabling docker service ...
	I1225 19:03:10.298179  290541 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1225 19:03:10.318426  290541 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1225 19:03:10.331202  290541 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1225 19:03:10.418713  290541 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1225 19:03:10.508858  290541 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1225 19:03:10.521817  290541 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1225 19:03:10.536454  290541 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1225 19:03:10.536505  290541 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1225 19:03:10.546955  290541 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1225 19:03:10.547041  290541 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1225 19:03:10.555738  290541 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1225 19:03:10.564495  290541 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1225 19:03:10.573237  290541 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1225 19:03:10.581102  290541 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1225 19:03:10.589368  290541 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1225 19:03:10.602102  290541 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1225 19:03:10.610491  290541 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1225 19:03:10.617558  290541 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1225 19:03:10.624678  290541 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1225 19:03:10.707455  290541 ssh_runner.go:195] Run: sudo systemctl restart crio
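The sed edits above amount to four settings in /etc/crio/crio.conf.d/02-crio.conf: the pause image registry.k8s.io/pause:3.10.1, cgroup_manager "systemd", conmon_cgroup "pod", and net.ipv4.ip_unprivileged_port_start=0 injected into default_sysctls. Written out as one drop-in they would look roughly like this (a sketch; 99-example.conf is a hypothetical filename, and the section headers follow CRI-O's usual TOML layout rather than anything shown in this log):

    sudo tee /etc/crio/crio.conf.d/99-example.conf >/dev/null <<'EOF'
    [crio.image]
    pause_image = "registry.k8s.io/pause:3.10.1"

    [crio.runtime]
    cgroup_manager = "systemd"
    conmon_cgroup = "pod"
    default_sysctls = [
      "net.ipv4.ip_unprivileged_port_start=0",
    ]
    EOF
    sudo systemctl restart crio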
	I1225 19:03:10.848512  290541 start.go:553] Will wait 60s for socket path /var/run/crio/crio.sock
	I1225 19:03:10.848583  290541 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1225 19:03:10.852823  290541 start.go:574] Will wait 60s for crictl version
	I1225 19:03:10.852874  290541 ssh_runner.go:195] Run: which crictl
	I1225 19:03:10.856546  290541 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1225 19:03:10.882737  290541 start.go:590] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1225 19:03:10.882811  290541 ssh_runner.go:195] Run: crio --version
	I1225 19:03:10.908556  290541 ssh_runner.go:195] Run: crio --version
	I1225 19:03:10.936701  290541 out.go:179] * Preparing Kubernetes v1.34.3 on CRI-O 1.34.3 ...
	W1225 19:03:08.252388  281279 pod_ready.go:104] pod "coredns-7d764666f9-lqvms" is not "Ready", error: <nil>
	I1225 19:03:10.253970  281279 pod_ready.go:94] pod "coredns-7d764666f9-lqvms" is "Ready"
	I1225 19:03:10.254003  281279 pod_ready.go:86] duration metric: took 37.507239153s for pod "coredns-7d764666f9-lqvms" in "kube-system" namespace to be "Ready" or be gone ...
	I1225 19:03:10.256491  281279 pod_ready.go:83] waiting for pod "etcd-no-preload-148352" in "kube-system" namespace to be "Ready" or be gone ...
	I1225 19:03:10.270319  281279 pod_ready.go:94] pod "etcd-no-preload-148352" is "Ready"
	I1225 19:03:10.270349  281279 pod_ready.go:86] duration metric: took 13.833526ms for pod "etcd-no-preload-148352" in "kube-system" namespace to be "Ready" or be gone ...
	I1225 19:03:10.357545  281279 pod_ready.go:83] waiting for pod "kube-apiserver-no-preload-148352" in "kube-system" namespace to be "Ready" or be gone ...
	I1225 19:03:10.362136  281279 pod_ready.go:94] pod "kube-apiserver-no-preload-148352" is "Ready"
	I1225 19:03:10.362165  281279 pod_ready.go:86] duration metric: took 4.592851ms for pod "kube-apiserver-no-preload-148352" in "kube-system" namespace to be "Ready" or be gone ...
	I1225 19:03:10.364693  281279 pod_ready.go:83] waiting for pod "kube-controller-manager-no-preload-148352" in "kube-system" namespace to be "Ready" or be gone ...
	I1225 19:03:10.453477  281279 pod_ready.go:94] pod "kube-controller-manager-no-preload-148352" is "Ready"
	I1225 19:03:10.453527  281279 pod_ready.go:86] duration metric: took 88.778375ms for pod "kube-controller-manager-no-preload-148352" in "kube-system" namespace to be "Ready" or be gone ...
	I1225 19:03:10.655405  281279 pod_ready.go:83] waiting for pod "kube-proxy-j2p4x" in "kube-system" namespace to be "Ready" or be gone ...
	I1225 19:03:11.050821  281279 pod_ready.go:94] pod "kube-proxy-j2p4x" is "Ready"
	I1225 19:03:11.050848  281279 pod_ready.go:86] duration metric: took 395.411494ms for pod "kube-proxy-j2p4x" in "kube-system" namespace to be "Ready" or be gone ...
	I1225 19:03:11.251357  281279 pod_ready.go:83] waiting for pod "kube-scheduler-no-preload-148352" in "kube-system" namespace to be "Ready" or be gone ...
	I1225 19:03:11.650977  281279 pod_ready.go:94] pod "kube-scheduler-no-preload-148352" is "Ready"
	I1225 19:03:11.650999  281279 pod_ready.go:86] duration metric: took 399.61097ms for pod "kube-scheduler-no-preload-148352" in "kube-system" namespace to be "Ready" or be gone ...
	I1225 19:03:11.651010  281279 pod_ready.go:40] duration metric: took 38.907995238s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1225 19:03:11.698020  281279 start.go:625] kubectl: 1.35.0, cluster: 1.35.0-rc.1 (minor skew: 0)
	I1225 19:03:11.700129  281279 out.go:179] * Done! kubectl is now configured to use "no-preload-148352" cluster and "default" namespace by default
	I1225 19:03:10.937944  290541 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-960022 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1225 19:03:10.955707  290541 ssh_runner.go:195] Run: grep 192.168.103.1	host.minikube.internal$ /etc/hosts
	I1225 19:03:10.959652  290541 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.103.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1225 19:03:10.969859  290541 kubeadm.go:884] updating cluster {Name:default-k8s-diff-port-960022 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:default-k8s-diff-port-960022 Namespace:default APIServerHAVIP: APISer
verName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8444 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false
CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false} ...
	I1225 19:03:10.970004  290541 preload.go:188] Checking if preload exists for k8s version v1.34.3 and runtime crio
	I1225 19:03:10.970067  290541 ssh_runner.go:195] Run: sudo crictl images --output json
	I1225 19:03:11.000927  290541 crio.go:561] all images are preloaded for cri-o runtime.
	I1225 19:03:11.000960  290541 crio.go:433] Images already preloaded, skipping extraction
	I1225 19:03:11.001017  290541 ssh_runner.go:195] Run: sudo crictl images --output json
	I1225 19:03:11.025392  290541 crio.go:561] all images are preloaded for cri-o runtime.
	I1225 19:03:11.025414  290541 cache_images.go:86] Images are preloaded, skipping loading
	I1225 19:03:11.025425  290541 kubeadm.go:935] updating node { 192.168.103.2 8444 v1.34.3 crio true true} ...
	I1225 19:03:11.025513  290541 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=default-k8s-diff-port-960022 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.103.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.3 ClusterName:default-k8s-diff-port-960022 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1225 19:03:11.025590  290541 ssh_runner.go:195] Run: crio config
	I1225 19:03:11.076734  290541 cni.go:84] Creating CNI manager for ""
	I1225 19:03:11.076756  290541 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1225 19:03:11.076773  290541 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1225 19:03:11.076802  290541 kubeadm.go:197] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.103.2 APIServerPort:8444 KubernetesVersion:v1.34.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-960022 NodeName:default-k8s-diff-port-960022 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.103.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.103.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/c
a.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1225 19:03:11.076995  290541 kubeadm.go:203] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.103.2
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-960022"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.103.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.103.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1225 19:03:11.077093  290541 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.3
	I1225 19:03:11.086253  290541 binaries.go:51] Found k8s binaries, skipping transfer
	I1225 19:03:11.086316  290541 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1225 19:03:11.095352  290541 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (379 bytes)
	I1225 19:03:11.110084  290541 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1225 19:03:11.128766  290541 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2227 bytes)
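Note: the kubeadm config rendered above is the 2227-byte file shipped to the node as /var/tmp/minikube/kubeadm.yaml.new and later copied to /var/tmp/minikube/kubeadm.yaml before init. A minimal sketch for validating such a config without changing cluster state, run as root inside the node with the binary path from this log (dry-run is a stock kubeadm flag, not something minikube invokes here):

	# Validate the generated config against kubeadm without applying it
	sudo /var/lib/minikube/binaries/v1.34.3/kubeadm init \
	  --config /var/tmp/minikube/kubeadm.yaml --dry-run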
	I1225 19:03:11.141591  290541 ssh_runner.go:195] Run: grep 192.168.103.2	control-plane.minikube.internal$ /etc/hosts
	I1225 19:03:11.145207  290541 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.103.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1225 19:03:11.156484  290541 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1225 19:03:11.252998  290541 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1225 19:03:11.278658  290541 certs.go:69] Setting up /home/jenkins/minikube-integration/22301-5579/.minikube/profiles/default-k8s-diff-port-960022 for IP: 192.168.103.2
	I1225 19:03:11.278680  290541 certs.go:195] generating shared ca certs ...
	I1225 19:03:11.278706  290541 certs.go:227] acquiring lock for ca certs: {Name:mkc96ab6366f062029d385d20297063671b19bca Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1225 19:03:11.279070  290541 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22301-5579/.minikube/ca.key
	I1225 19:03:11.279143  290541 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22301-5579/.minikube/proxy-client-ca.key
	I1225 19:03:11.279160  290541 certs.go:257] generating profile certs ...
	I1225 19:03:11.279236  290541 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22301-5579/.minikube/profiles/default-k8s-diff-port-960022/client.key
	I1225 19:03:11.279251  290541 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22301-5579/.minikube/profiles/default-k8s-diff-port-960022/client.crt with IP's: []
	I1225 19:03:11.311270  290541 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22301-5579/.minikube/profiles/default-k8s-diff-port-960022/client.crt ...
	I1225 19:03:11.311306  290541 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22301-5579/.minikube/profiles/default-k8s-diff-port-960022/client.crt: {Name:mk32536f2e89a3eda9585f7095b2d94b4d0d92fd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1225 19:03:11.311516  290541 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22301-5579/.minikube/profiles/default-k8s-diff-port-960022/client.key ...
	I1225 19:03:11.311537  290541 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22301-5579/.minikube/profiles/default-k8s-diff-port-960022/client.key: {Name:mk9b6414010a81635dab73577843147d7842ae32 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1225 19:03:11.311696  290541 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22301-5579/.minikube/profiles/default-k8s-diff-port-960022/apiserver.key.a3ef6c0c
	I1225 19:03:11.311722  290541 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22301-5579/.minikube/profiles/default-k8s-diff-port-960022/apiserver.crt.a3ef6c0c with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.103.2]
	I1225 19:03:11.378381  290541 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22301-5579/.minikube/profiles/default-k8s-diff-port-960022/apiserver.crt.a3ef6c0c ...
	I1225 19:03:11.378405  290541 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22301-5579/.minikube/profiles/default-k8s-diff-port-960022/apiserver.crt.a3ef6c0c: {Name:mk0de737dcfd45542b929ddc2fcb19b22cc1d79d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1225 19:03:11.378580  290541 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22301-5579/.minikube/profiles/default-k8s-diff-port-960022/apiserver.key.a3ef6c0c ...
	I1225 19:03:11.378597  290541 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22301-5579/.minikube/profiles/default-k8s-diff-port-960022/apiserver.key.a3ef6c0c: {Name:mkb082fb82d4aa0c55c71dc96dfbcbbd4a1f57b7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1225 19:03:11.378703  290541 certs.go:382] copying /home/jenkins/minikube-integration/22301-5579/.minikube/profiles/default-k8s-diff-port-960022/apiserver.crt.a3ef6c0c -> /home/jenkins/minikube-integration/22301-5579/.minikube/profiles/default-k8s-diff-port-960022/apiserver.crt
	I1225 19:03:11.378790  290541 certs.go:386] copying /home/jenkins/minikube-integration/22301-5579/.minikube/profiles/default-k8s-diff-port-960022/apiserver.key.a3ef6c0c -> /home/jenkins/minikube-integration/22301-5579/.minikube/profiles/default-k8s-diff-port-960022/apiserver.key
	I1225 19:03:11.378874  290541 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22301-5579/.minikube/profiles/default-k8s-diff-port-960022/proxy-client.key
	I1225 19:03:11.378912  290541 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22301-5579/.minikube/profiles/default-k8s-diff-port-960022/proxy-client.crt with IP's: []
	I1225 19:03:11.435262  290541 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22301-5579/.minikube/profiles/default-k8s-diff-port-960022/proxy-client.crt ...
	I1225 19:03:11.435289  290541 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22301-5579/.minikube/profiles/default-k8s-diff-port-960022/proxy-client.crt: {Name:mk957cdcdb598703fddf6148360e81b85418c70a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1225 19:03:11.435458  290541 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22301-5579/.minikube/profiles/default-k8s-diff-port-960022/proxy-client.key ...
	I1225 19:03:11.435479  290541 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22301-5579/.minikube/profiles/default-k8s-diff-port-960022/proxy-client.key: {Name:mk14ca2d78a55c3fdc968bd5cd9741d839de08ee Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1225 19:03:11.435696  290541 certs.go:484] found cert: /home/jenkins/minikube-integration/22301-5579/.minikube/certs/9112.pem (1338 bytes)
	W1225 19:03:11.435745  290541 certs.go:480] ignoring /home/jenkins/minikube-integration/22301-5579/.minikube/certs/9112_empty.pem, impossibly tiny 0 bytes
	I1225 19:03:11.435762  290541 certs.go:484] found cert: /home/jenkins/minikube-integration/22301-5579/.minikube/certs/ca-key.pem (1679 bytes)
	I1225 19:03:11.435799  290541 certs.go:484] found cert: /home/jenkins/minikube-integration/22301-5579/.minikube/certs/ca.pem (1078 bytes)
	I1225 19:03:11.435834  290541 certs.go:484] found cert: /home/jenkins/minikube-integration/22301-5579/.minikube/certs/cert.pem (1123 bytes)
	I1225 19:03:11.435868  290541 certs.go:484] found cert: /home/jenkins/minikube-integration/22301-5579/.minikube/certs/key.pem (1679 bytes)
	I1225 19:03:11.435941  290541 certs.go:484] found cert: /home/jenkins/minikube-integration/22301-5579/.minikube/files/etc/ssl/certs/91122.pem (1708 bytes)
	I1225 19:03:11.436682  290541 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22301-5579/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1225 19:03:11.457965  290541 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22301-5579/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1225 19:03:11.476251  290541 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22301-5579/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1225 19:03:11.494329  290541 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22301-5579/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1225 19:03:11.511365  290541 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22301-5579/.minikube/profiles/default-k8s-diff-port-960022/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1225 19:03:11.529885  290541 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22301-5579/.minikube/profiles/default-k8s-diff-port-960022/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1225 19:03:11.548550  290541 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22301-5579/.minikube/profiles/default-k8s-diff-port-960022/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1225 19:03:11.565493  290541 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22301-5579/.minikube/profiles/default-k8s-diff-port-960022/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1225 19:03:11.582435  290541 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22301-5579/.minikube/files/etc/ssl/certs/91122.pem --> /usr/share/ca-certificates/91122.pem (1708 bytes)
	I1225 19:03:11.600828  290541 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22301-5579/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1225 19:03:11.618667  290541 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22301-5579/.minikube/certs/9112.pem --> /usr/share/ca-certificates/9112.pem (1338 bytes)
	I1225 19:03:11.636460  290541 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (722 bytes)
	I1225 19:03:11.649422  290541 ssh_runner.go:195] Run: openssl version
	I1225 19:03:11.656415  290541 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/91122.pem
	I1225 19:03:11.665217  290541 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/91122.pem /etc/ssl/certs/91122.pem
	I1225 19:03:11.674289  290541 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/91122.pem
	I1225 19:03:11.678053  290541 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 25 18:34 /usr/share/ca-certificates/91122.pem
	I1225 19:03:11.678108  290541 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/91122.pem
	I1225 19:03:11.717182  290541 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1225 19:03:11.726661  290541 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/91122.pem /etc/ssl/certs/3ec20f2e.0
	I1225 19:03:11.735492  290541 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1225 19:03:11.742735  290541 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1225 19:03:11.750048  290541 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1225 19:03:11.754134  290541 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 25 18:28 /usr/share/ca-certificates/minikubeCA.pem
	I1225 19:03:11.754185  290541 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1225 19:03:11.794412  290541 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1225 19:03:11.803581  290541 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
	I1225 19:03:11.812000  290541 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/9112.pem
	I1225 19:03:11.821162  290541 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/9112.pem /etc/ssl/certs/9112.pem
	I1225 19:03:11.829185  290541 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/9112.pem
	I1225 19:03:11.833291  290541 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 25 18:34 /usr/share/ca-certificates/9112.pem
	I1225 19:03:11.833342  290541 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/9112.pem
	I1225 19:03:11.868441  290541 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1225 19:03:11.875949  290541 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/9112.pem /etc/ssl/certs/51391683.0
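Note: the hash-named links created above (3ec20f2e.0, b5213941.0, 51391683.0) follow the standard OpenSSL trust-store layout: each link name is the subject hash of the certificate it points at. A sketch reproducing the step for one certificate with the same commands the log runs (the hash value shown is inferred from the symlink name above):

	# Print the OpenSSL subject hash for the minikube CA, then link the cert under that name
	openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	# -> b5213941
	sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0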
	I1225 19:03:11.883260  290541 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1225 19:03:11.886717  290541 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1225 19:03:11.886773  290541 kubeadm.go:401] StartCluster: {Name:default-k8s-diff-port-960022 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:default-k8s-diff-port-960022 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8444 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1225 19:03:11.886857  290541 cri.go:61] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1225 19:03:11.886922  290541 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1225 19:03:11.913305  290541 cri.go:96] found id: ""
	I1225 19:03:11.913362  290541 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1225 19:03:11.921550  290541 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1225 19:03:11.929740  290541 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1225 19:03:11.929783  290541 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1225 19:03:11.937471  290541 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1225 19:03:11.937496  290541 kubeadm.go:158] found existing configuration files:
	
	I1225 19:03:11.937536  290541 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I1225 19:03:11.944801  290541 kubeadm.go:164] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1225 19:03:11.944840  290541 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1225 19:03:11.953174  290541 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I1225 19:03:11.960603  290541 kubeadm.go:164] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1225 19:03:11.960649  290541 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1225 19:03:11.969760  290541 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I1225 19:03:11.978412  290541 kubeadm.go:164] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1225 19:03:11.978467  290541 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1225 19:03:11.986927  290541 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I1225 19:03:11.995043  290541 kubeadm.go:164] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1225 19:03:11.995105  290541 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
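Note: the four grep/rm pairs above are minikube's stale-config cleanup: any kubeconfig under /etc/kubernetes that does not reference the expected endpoint (https://control-plane.minikube.internal:8444) is removed before kubeadm init runs; on this first start all four files were simply absent. The same check for one file, condensed into a single sketch:

	# Remove admin.conf unless it already points at the expected control-plane endpoint
	sudo grep -q https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf \
	  || sudo rm -f /etc/kubernetes/admin.conf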
	I1225 19:03:12.003077  290541 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1225 19:03:12.042697  290541 kubeadm.go:319] [init] Using Kubernetes version: v1.34.3
	I1225 19:03:12.042775  290541 kubeadm.go:319] [preflight] Running pre-flight checks
	I1225 19:03:12.075292  290541 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1225 19:03:12.075403  290541 kubeadm.go:319] KERNEL_VERSION: 6.8.0-1045-gcp
	I1225 19:03:12.075476  290541 kubeadm.go:319] OS: Linux
	I1225 19:03:12.075561  290541 kubeadm.go:319] CGROUPS_CPU: enabled
	I1225 19:03:12.075626  290541 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1225 19:03:12.075710  290541 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1225 19:03:12.075786  290541 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1225 19:03:12.075879  290541 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1225 19:03:12.075984  290541 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1225 19:03:12.076060  290541 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1225 19:03:12.076126  290541 kubeadm.go:319] CGROUPS_IO: enabled
	I1225 19:03:12.137914  290541 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1225 19:03:12.138081  290541 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1225 19:03:12.138228  290541 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1225 19:03:12.146676  290541 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1225 19:03:11.047991  260034 api_server.go:315] stopped: https://192.168.94.2:8443/healthz: Get "https://192.168.94.2:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1225 19:03:11.048064  260034 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1225 19:03:11.048128  260034 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1225 19:03:11.078352  260034 cri.go:96] found id: "1d4a54214a1675f6c77eff77964da947e4f00cff45e007ad6adb0c898c3e76fa"
	I1225 19:03:11.078375  260034 cri.go:96] found id: "6286b997d7e536f57739dca40618206bd2111cd0ea9142bc6f4203ad7d126123"
	I1225 19:03:11.078381  260034 cri.go:96] found id: "e3d5802d1e3d9285d4b0925e9f5391b90305352a65abbab3d6461e977e07719c"
	I1225 19:03:11.078386  260034 cri.go:96] found id: ""
	I1225 19:03:11.078394  260034 logs.go:282] 3 containers: [1d4a54214a1675f6c77eff77964da947e4f00cff45e007ad6adb0c898c3e76fa 6286b997d7e536f57739dca40618206bd2111cd0ea9142bc6f4203ad7d126123 e3d5802d1e3d9285d4b0925e9f5391b90305352a65abbab3d6461e977e07719c]
	I1225 19:03:11.078452  260034 ssh_runner.go:195] Run: which crictl
	I1225 19:03:11.082676  260034 ssh_runner.go:195] Run: which crictl
	I1225 19:03:11.086760  260034 ssh_runner.go:195] Run: which crictl
	I1225 19:03:11.090483  260034 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1225 19:03:11.090541  260034 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1225 19:03:11.121886  260034 cri.go:96] found id: "b9e20d208a63ece66f4b7d503d5220ba92810f69d6963dee30a8cf70eeec5508"
	I1225 19:03:11.121925  260034 cri.go:96] found id: ""
	I1225 19:03:11.121936  260034 logs.go:282] 1 containers: [b9e20d208a63ece66f4b7d503d5220ba92810f69d6963dee30a8cf70eeec5508]
	I1225 19:03:11.121995  260034 ssh_runner.go:195] Run: which crictl
	I1225 19:03:11.126770  260034 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1225 19:03:11.126850  260034 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1225 19:03:11.154969  260034 cri.go:96] found id: ""
	I1225 19:03:11.154993  260034 logs.go:282] 0 containers: []
	W1225 19:03:11.155004  260034 logs.go:284] No container was found matching "coredns"
	I1225 19:03:11.155011  260034 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1225 19:03:11.155069  260034 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1225 19:03:11.187513  260034 cri.go:96] found id: "5908cdc560e3221d929c96c779b2e73ece21f96acb63b3adbf448cc7de791f2b"
	I1225 19:03:11.187537  260034 cri.go:96] found id: "47a1daa4bde269c4add70fd957319c2bd011e78566a1187d9a76b78fb4005782"
	I1225 19:03:11.187542  260034 cri.go:96] found id: ""
	I1225 19:03:11.187552  260034 logs.go:282] 2 containers: [5908cdc560e3221d929c96c779b2e73ece21f96acb63b3adbf448cc7de791f2b 47a1daa4bde269c4add70fd957319c2bd011e78566a1187d9a76b78fb4005782]
	I1225 19:03:11.187623  260034 ssh_runner.go:195] Run: which crictl
	I1225 19:03:11.193142  260034 ssh_runner.go:195] Run: which crictl
	I1225 19:03:11.199845  260034 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1225 19:03:11.199935  260034 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1225 19:03:11.229683  260034 cri.go:96] found id: ""
	I1225 19:03:11.229706  260034 logs.go:282] 0 containers: []
	W1225 19:03:11.229714  260034 logs.go:284] No container was found matching "kube-proxy"
	I1225 19:03:11.229718  260034 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1225 19:03:11.229763  260034 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1225 19:03:11.256771  260034 cri.go:96] found id: "4d705cb1d8d35bb521ed14e8e331d8b60738d540fae000680afb7e9d190d17db"
	I1225 19:03:11.256791  260034 cri.go:96] found id: "33343dda71f9e4a85025812aa89a625b9ed4aacd8cf40e537a040b7a352c6338"
	I1225 19:03:11.256799  260034 cri.go:96] found id: ""
	I1225 19:03:11.256806  260034 logs.go:282] 2 containers: [4d705cb1d8d35bb521ed14e8e331d8b60738d540fae000680afb7e9d190d17db 33343dda71f9e4a85025812aa89a625b9ed4aacd8cf40e537a040b7a352c6338]
	I1225 19:03:11.256855  260034 ssh_runner.go:195] Run: which crictl
	I1225 19:03:11.260853  260034 ssh_runner.go:195] Run: which crictl
	I1225 19:03:11.264338  260034 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1225 19:03:11.264393  260034 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1225 19:03:11.295945  260034 cri.go:96] found id: ""
	I1225 19:03:11.295967  260034 logs.go:282] 0 containers: []
	W1225 19:03:11.295975  260034 logs.go:284] No container was found matching "kindnet"
	I1225 19:03:11.295980  260034 cri.go:61] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1225 19:03:11.296032  260034 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=storage-provisioner
	I1225 19:03:11.326722  260034 cri.go:96] found id: ""
	I1225 19:03:11.326746  260034 logs.go:282] 0 containers: []
	W1225 19:03:11.326757  260034 logs.go:284] No container was found matching "storage-provisioner"
	I1225 19:03:11.326767  260034 logs.go:123] Gathering logs for CRI-O ...
	I1225 19:03:11.326780  260034 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1225 19:03:11.377016  260034 logs.go:123] Gathering logs for kube-apiserver [1d4a54214a1675f6c77eff77964da947e4f00cff45e007ad6adb0c898c3e76fa] ...
	I1225 19:03:11.377049  260034 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 1d4a54214a1675f6c77eff77964da947e4f00cff45e007ad6adb0c898c3e76fa"
	I1225 19:03:11.407203  260034 logs.go:123] Gathering logs for kube-apiserver [6286b997d7e536f57739dca40618206bd2111cd0ea9142bc6f4203ad7d126123] ...
	I1225 19:03:11.407231  260034 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 6286b997d7e536f57739dca40618206bd2111cd0ea9142bc6f4203ad7d126123"
	I1225 19:03:11.438636  260034 logs.go:123] Gathering logs for kube-scheduler [5908cdc560e3221d929c96c779b2e73ece21f96acb63b3adbf448cc7de791f2b] ...
	I1225 19:03:11.438661  260034 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5908cdc560e3221d929c96c779b2e73ece21f96acb63b3adbf448cc7de791f2b"
	I1225 19:03:11.466434  260034 logs.go:123] Gathering logs for kube-controller-manager [4d705cb1d8d35bb521ed14e8e331d8b60738d540fae000680afb7e9d190d17db] ...
	I1225 19:03:11.466461  260034 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 4d705cb1d8d35bb521ed14e8e331d8b60738d540fae000680afb7e9d190d17db"
	I1225 19:03:11.492372  260034 logs.go:123] Gathering logs for container status ...
	I1225 19:03:11.492398  260034 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1225 19:03:11.523343  260034 logs.go:123] Gathering logs for kubelet ...
	I1225 19:03:11.523370  260034 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1225 19:03:11.603534  260034 logs.go:123] Gathering logs for dmesg ...
	I1225 19:03:11.603561  260034 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1225 19:03:11.617107  260034 logs.go:123] Gathering logs for describe nodes ...
	I1225 19:03:11.617133  260034 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1225 19:03:12.148720  290541 out.go:252]   - Generating certificates and keys ...
	I1225 19:03:12.148820  290541 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1225 19:03:12.148941  290541 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1225 19:03:12.492963  290541 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1225 19:03:12.589526  290541 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1225 19:03:12.781681  290541 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1225 19:03:12.911174  290541 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	W1225 19:03:10.663757  283722 pod_ready.go:104] pod "coredns-66bc5c9577-n4nqj" is not "Ready", error: <nil>
	W1225 19:03:13.163492  283722 pod_ready.go:104] pod "coredns-66bc5c9577-n4nqj" is not "Ready", error: <nil>
	I1225 19:03:13.287408  290541 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1225 19:03:13.287535  290541 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [default-k8s-diff-port-960022 localhost] and IPs [192.168.103.2 127.0.0.1 ::1]
	I1225 19:03:13.591464  290541 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1225 19:03:13.591721  290541 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [default-k8s-diff-port-960022 localhost] and IPs [192.168.103.2 127.0.0.1 ::1]
	I1225 19:03:13.735044  290541 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1225 19:03:14.135286  290541 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1225 19:03:14.247035  290541 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1225 19:03:14.247172  290541 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1225 19:03:14.380656  290541 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1225 19:03:14.587167  290541 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1225 19:03:14.666880  290541 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1225 19:03:14.881639  290541 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1225 19:03:15.249231  290541 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1225 19:03:15.249827  290541 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1225 19:03:15.253689  290541 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1225 19:03:15.256473  290541 out.go:252]   - Booting up control plane ...
	I1225 19:03:15.256600  290541 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1225 19:03:15.256710  290541 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1225 19:03:15.257363  290541 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1225 19:03:15.270872  290541 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1225 19:03:15.271019  290541 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1225 19:03:15.277503  290541 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1225 19:03:15.277804  290541 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1225 19:03:15.277867  290541 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1225 19:03:15.382073  290541 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1225 19:03:15.382235  290541 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1225 19:03:15.883858  290541 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 501.839176ms
	I1225 19:03:15.886634  290541 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1225 19:03:15.886785  290541 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.103.2:8444/livez
	I1225 19:03:15.886884  290541 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1225 19:03:15.887000  290541 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1225 19:03:17.525448  290541 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 1.638650065s
	I1225 19:03:17.662016  290541 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 1.775331698s
	W1225 19:03:15.164048  283722 pod_ready.go:104] pod "coredns-66bc5c9577-n4nqj" is not "Ready", error: <nil>
	W1225 19:03:17.164424  283722 pod_ready.go:104] pod "coredns-66bc5c9577-n4nqj" is not "Ready", error: <nil>
	I1225 19:03:19.388073  290541 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 3.501352532s
	I1225 19:03:19.406371  290541 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1225 19:03:19.415951  290541 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1225 19:03:19.425057  290541 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1225 19:03:19.425375  290541 kubeadm.go:319] [mark-control-plane] Marking the node default-k8s-diff-port-960022 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1225 19:03:19.432994  290541 kubeadm.go:319] [bootstrap-token] Using token: dqiqgc.7rvz0zi3i4hgo1bx
	I1225 19:03:19.434207  290541 out.go:252]   - Configuring RBAC rules ...
	I1225 19:03:19.434361  290541 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1225 19:03:19.437220  290541 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1225 19:03:19.441812  290541 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1225 19:03:19.444110  290541 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1225 19:03:19.447043  290541 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1225 19:03:19.449197  290541 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1225 19:03:19.797886  290541 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1225 19:03:20.209580  290541 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1225 19:03:20.796262  290541 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1225 19:03:20.797070  290541 kubeadm.go:319] 
	I1225 19:03:20.797174  290541 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1225 19:03:20.797193  290541 kubeadm.go:319] 
	I1225 19:03:20.797285  290541 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1225 19:03:20.797295  290541 kubeadm.go:319] 
	I1225 19:03:20.797331  290541 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1225 19:03:20.797402  290541 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1225 19:03:20.797473  290541 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1225 19:03:20.797492  290541 kubeadm.go:319] 
	I1225 19:03:20.797591  290541 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1225 19:03:20.797608  290541 kubeadm.go:319] 
	I1225 19:03:20.797671  290541 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1225 19:03:20.797681  290541 kubeadm.go:319] 
	I1225 19:03:20.797764  290541 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1225 19:03:20.797877  290541 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1225 19:03:20.797994  290541 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1225 19:03:20.798008  290541 kubeadm.go:319] 
	I1225 19:03:20.798138  290541 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1225 19:03:20.798263  290541 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1225 19:03:20.798274  290541 kubeadm.go:319] 
	I1225 19:03:20.798394  290541 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8444 --token dqiqgc.7rvz0zi3i4hgo1bx \
	I1225 19:03:20.798536  290541 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:0fa81e5b6cf900085d4303938dc22eec97b7b2affd914cb977b5ad4f033ddf10 \
	I1225 19:03:20.798569  290541 kubeadm.go:319] 	--control-plane 
	I1225 19:03:20.798582  290541 kubeadm.go:319] 
	I1225 19:03:20.798693  290541 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1225 19:03:20.798700  290541 kubeadm.go:319] 
	I1225 19:03:20.798773  290541 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8444 --token dqiqgc.7rvz0zi3i4hgo1bx \
	I1225 19:03:20.798877  290541 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:0fa81e5b6cf900085d4303938dc22eec97b7b2affd914cb977b5ad4f033ddf10 
	I1225 19:03:20.801807  290541 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1045-gcp\n", err: exit status 1
	I1225 19:03:20.801946  290541 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
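Note: both preflight messages above are warnings, not errors, and kubeadm continued. The second one suggests enabling the kubelet unit; minikube starts the unit directly instead, so this is only needed if the service should also come up on reboot:

	# Optional, per the kubeadm warning: enable the kubelet unit at boot
	sudo systemctl enable kubelet.service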
	I1225 19:03:20.801992  290541 cni.go:84] Creating CNI manager for ""
	I1225 19:03:20.802005  290541 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1225 19:03:20.804040  290541 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1225 19:03:21.679630  260034 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": (10.062480115s)
	W1225 19:03:21.679674  260034 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	Unable to connect to the server: net/http: TLS handshake timeout
	 output: 
	** stderr ** 
	Unable to connect to the server: net/http: TLS handshake timeout
	
	** /stderr **
	I1225 19:03:21.679685  260034 logs.go:123] Gathering logs for kube-apiserver [e3d5802d1e3d9285d4b0925e9f5391b90305352a65abbab3d6461e977e07719c] ...
	I1225 19:03:21.679703  260034 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e3d5802d1e3d9285d4b0925e9f5391b90305352a65abbab3d6461e977e07719c"
	I1225 19:03:21.721645  260034 logs.go:123] Gathering logs for etcd [b9e20d208a63ece66f4b7d503d5220ba92810f69d6963dee30a8cf70eeec5508] ...
	I1225 19:03:21.721682  260034 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b9e20d208a63ece66f4b7d503d5220ba92810f69d6963dee30a8cf70eeec5508"
	I1225 19:03:21.755144  260034 logs.go:123] Gathering logs for kube-scheduler [47a1daa4bde269c4add70fd957319c2bd011e78566a1187d9a76b78fb4005782] ...
	I1225 19:03:21.755179  260034 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 47a1daa4bde269c4add70fd957319c2bd011e78566a1187d9a76b78fb4005782"
	I1225 19:03:21.783461  260034 logs.go:123] Gathering logs for kube-controller-manager [33343dda71f9e4a85025812aa89a625b9ed4aacd8cf40e537a040b7a352c6338] ...
	I1225 19:03:21.783485  260034 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 33343dda71f9e4a85025812aa89a625b9ed4aacd8cf40e537a040b7a352c6338"
	W1225 19:03:19.665295  283722 pod_ready.go:104] pod "coredns-66bc5c9577-n4nqj" is not "Ready", error: <nil>
	I1225 19:03:21.663961  283722 pod_ready.go:94] pod "coredns-66bc5c9577-n4nqj" is "Ready"
	I1225 19:03:21.663995  283722 pod_ready.go:86] duration metric: took 35.505500978s for pod "coredns-66bc5c9577-n4nqj" in "kube-system" namespace to be "Ready" or be gone ...
	I1225 19:03:21.666425  283722 pod_ready.go:83] waiting for pod "etcd-embed-certs-684693" in "kube-system" namespace to be "Ready" or be gone ...
	I1225 19:03:21.670402  283722 pod_ready.go:94] pod "etcd-embed-certs-684693" is "Ready"
	I1225 19:03:21.670429  283722 pod_ready.go:86] duration metric: took 3.974917ms for pod "etcd-embed-certs-684693" in "kube-system" namespace to be "Ready" or be gone ...
	I1225 19:03:21.672351  283722 pod_ready.go:83] waiting for pod "kube-apiserver-embed-certs-684693" in "kube-system" namespace to be "Ready" or be gone ...
	I1225 19:03:21.676345  283722 pod_ready.go:94] pod "kube-apiserver-embed-certs-684693" is "Ready"
	I1225 19:03:21.676369  283722 pod_ready.go:86] duration metric: took 3.998184ms for pod "kube-apiserver-embed-certs-684693" in "kube-system" namespace to be "Ready" or be gone ...
	I1225 19:03:21.678331  283722 pod_ready.go:83] waiting for pod "kube-controller-manager-embed-certs-684693" in "kube-system" namespace to be "Ready" or be gone ...
	I1225 19:03:21.862122  283722 pod_ready.go:94] pod "kube-controller-manager-embed-certs-684693" is "Ready"
	I1225 19:03:21.862153  283722 pod_ready.go:86] duration metric: took 183.798503ms for pod "kube-controller-manager-embed-certs-684693" in "kube-system" namespace to be "Ready" or be gone ...
	I1225 19:03:22.062056  283722 pod_ready.go:83] waiting for pod "kube-proxy-wzb26" in "kube-system" namespace to be "Ready" or be gone ...
	I1225 19:03:22.461800  283722 pod_ready.go:94] pod "kube-proxy-wzb26" is "Ready"
	I1225 19:03:22.461830  283722 pod_ready.go:86] duration metric: took 399.750088ms for pod "kube-proxy-wzb26" in "kube-system" namespace to be "Ready" or be gone ...
	I1225 19:03:22.662801  283722 pod_ready.go:83] waiting for pod "kube-scheduler-embed-certs-684693" in "kube-system" namespace to be "Ready" or be gone ...
	I1225 19:03:23.062729  283722 pod_ready.go:94] pod "kube-scheduler-embed-certs-684693" is "Ready"
	I1225 19:03:23.062758  283722 pod_ready.go:86] duration metric: took 399.920395ms for pod "kube-scheduler-embed-certs-684693" in "kube-system" namespace to be "Ready" or be gone ...
	I1225 19:03:23.062772  283722 pod_ready.go:40] duration metric: took 36.908039298s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1225 19:03:23.108169  283722 start.go:625] kubectl: 1.35.0, cluster: 1.34.3 (minor skew: 1)
	I1225 19:03:23.110150  283722 out.go:179] * Done! kubectl is now configured to use "embed-certs-684693" cluster and "default" namespace by default
	I1225 19:03:20.805144  290541 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1225 19:03:20.809668  290541 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.3/kubectl ...
	I1225 19:03:20.809688  290541 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2620 bytes)
	I1225 19:03:20.823209  290541 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
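Note: the CNI step above writes the generated kindnet manifest to /var/tmp/minikube/cni.yaml and applies it with the pinned kubectl, exactly as the command shows. A sketch for checking the resulting kube-system pods from inside the node (no specific kindnet pod names are asserted here):

	# List kube-system pods with the same pinned kubectl and kubeconfig used in the log
	sudo /var/lib/minikube/binaries/v1.34.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig \
	  -n kube-system get pods -o wide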
	I1225 19:03:21.032445  290541 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1225 19:03:21.032523  290541 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1225 19:03:21.032561  290541 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes default-k8s-diff-port-960022 minikube.k8s.io/updated_at=2025_12_25T19_03_21_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=65b0339f3ab6fa9cf527eb915d9288ef7a9c7fef minikube.k8s.io/name=default-k8s-diff-port-960022 minikube.k8s.io/primary=true
	I1225 19:03:21.127957  290541 ops.go:34] apiserver oom_adj: -16
	I1225 19:03:21.128106  290541 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1225 19:03:21.628946  290541 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1225 19:03:22.129120  290541 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1225 19:03:22.628948  290541 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1225 19:03:23.129126  290541 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1225 19:03:23.628802  290541 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1225 19:03:24.129108  290541 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1225 19:03:24.629105  290541 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1225 19:03:25.128534  290541 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1225 19:03:25.629116  290541 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1225 19:03:25.705041  290541 kubeadm.go:1114] duration metric: took 4.672590994s to wait for elevateKubeSystemPrivileges
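Note: the repeated "kubectl get sa default" calls above are the elevateKubeSystemPrivileges wait: minikube polls until the default ServiceAccount is provisioned, having already issued the minikube-rbac cluster-admin binding for kube-system:default at 19:03:21. A sketch of the same wait plus a check of that binding, reusing the commands from this log:

	# Poll for the default ServiceAccount, then confirm the minikube-rbac binding exists
	until sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default \
	    --kubeconfig=/var/lib/minikube/kubeconfig; do sleep 1; done
	sudo /var/lib/minikube/binaries/v1.34.3/kubectl get clusterrolebinding minikube-rbac \
	    --kubeconfig=/var/lib/minikube/kubeconfig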
	I1225 19:03:25.705078  290541 kubeadm.go:403] duration metric: took 13.818308582s to StartCluster
	I1225 19:03:25.705101  290541 settings.go:142] acquiring lock: {Name:mk8db67a95daebdad9164c803819dcb179c3006a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1225 19:03:25.705173  290541 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22301-5579/kubeconfig
	I1225 19:03:25.707684  290541 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22301-5579/kubeconfig: {Name:mk959de02482281f87c2171d9b2421941fad1e06 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1225 19:03:25.707952  290541 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1225 19:03:25.707983  290541 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.103.2 Port:8444 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1225 19:03:25.708020  290541 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1225 19:03:25.708116  290541 addons.go:70] Setting storage-provisioner=true in profile "default-k8s-diff-port-960022"
	I1225 19:03:25.708153  290541 addons.go:70] Setting default-storageclass=true in profile "default-k8s-diff-port-960022"
	I1225 19:03:25.708165  290541 addons.go:239] Setting addon storage-provisioner=true in "default-k8s-diff-port-960022"
	I1225 19:03:25.708184  290541 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-960022"
	I1225 19:03:25.708194  290541 host.go:66] Checking if "default-k8s-diff-port-960022" exists ...
	I1225 19:03:25.708207  290541 config.go:182] Loaded profile config "default-k8s-diff-port-960022": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1225 19:03:25.708576  290541 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-960022 --format={{.State.Status}}
	I1225 19:03:25.708757  290541 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-960022 --format={{.State.Status}}
	I1225 19:03:25.710353  290541 out.go:179] * Verifying Kubernetes components...
	I1225 19:03:25.711592  290541 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1225 19:03:25.738516  290541 addons.go:239] Setting addon default-storageclass=true in "default-k8s-diff-port-960022"
	I1225 19:03:25.739025  290541 host.go:66] Checking if "default-k8s-diff-port-960022" exists ...
	I1225 19:03:25.739252  290541 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1225 19:03:25.739542  290541 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-960022 --format={{.State.Status}}
	I1225 19:03:25.741329  290541 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1225 19:03:25.741352  290541 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1225 19:03:25.741401  290541 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-960022
	I1225 19:03:25.775854  290541 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33088 SSHKeyPath:/home/jenkins/minikube-integration/22301-5579/.minikube/machines/default-k8s-diff-port-960022/id_rsa Username:docker}
	I1225 19:03:25.777786  290541 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1225 19:03:25.777813  290541 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1225 19:03:25.777870  290541 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-960022
	I1225 19:03:25.805486  290541 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33088 SSHKeyPath:/home/jenkins/minikube-integration/22301-5579/.minikube/machines/default-k8s-diff-port-960022/id_rsa Username:docker}
	I1225 19:03:25.821949  290541 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.103.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1225 19:03:25.881126  290541 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1225 19:03:25.901559  290541 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1225 19:03:25.929322  290541 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1225 19:03:26.031302  290541 start.go:987] {"host.minikube.internal": 192.168.103.1} host record injected into CoreDNS's ConfigMap
	I1225 19:03:26.033114  290541 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-960022" to be "Ready" ...
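Note: the host record injection at 19:03:25.821949 rewrites the CoreDNS ConfigMap so that host.minikube.internal resolves to 192.168.103.1 inside the cluster. A sketch for confirming the injected hosts block with the same pinned kubectl and kubeconfig (the grep pattern is only illustrative):

	sudo /var/lib/minikube/binaries/v1.34.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig \
	  -n kube-system get configmap coredns -o yaml | grep -A3 "hosts {"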
	
	
	==> CRI-O <==
	Dec 25 19:02:49 no-preload-148352 crio[569]: time="2025-12-25T19:02:49.910258809Z" level=info msg="Started container" PID=1770 containerID=af562d65ffa9a9c4b367299de55b10857f967e0f6508713db23f5acea7888a42 description=kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-gbfkd/dashboard-metrics-scraper id=5dfa393c-af4e-41fa-ba19-e321b2cf219e name=/runtime.v1.RuntimeService/StartContainer sandboxID=c803bd78efb4feff7779659502bc08b89d9f59fc19fe72a7696ebc331d76a452
	Dec 25 19:02:49 no-preload-148352 crio[569]: time="2025-12-25T19:02:49.941708482Z" level=info msg="Removing container: 27fa53c998eaf22e08f73724aba07761b5843089747743aa04a69356a323b28d" id=66a01e73-8280-4727-9c95-bd3e72c02d04 name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 25 19:02:49 no-preload-148352 crio[569]: time="2025-12-25T19:02:49.955957866Z" level=info msg="Removed container 27fa53c998eaf22e08f73724aba07761b5843089747743aa04a69356a323b28d: kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-gbfkd/dashboard-metrics-scraper" id=66a01e73-8280-4727-9c95-bd3e72c02d04 name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 25 19:03:02 no-preload-148352 crio[569]: time="2025-12-25T19:03:02.976934133Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=a5075958-f5c4-460e-8275-ee2732d1ec9a name=/runtime.v1.ImageService/ImageStatus
	Dec 25 19:03:02 no-preload-148352 crio[569]: time="2025-12-25T19:03:02.977990889Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=9feeff14-1b9f-4fb3-a009-58c929da05f5 name=/runtime.v1.ImageService/ImageStatus
	Dec 25 19:03:02 no-preload-148352 crio[569]: time="2025-12-25T19:03:02.979406005Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=60f812f4-69ef-440f-93cd-52e3e5706096 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 25 19:03:02 no-preload-148352 crio[569]: time="2025-12-25T19:03:02.979545717Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 25 19:03:02 no-preload-148352 crio[569]: time="2025-12-25T19:03:02.984405264Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 25 19:03:02 no-preload-148352 crio[569]: time="2025-12-25T19:03:02.984598987Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/ea9b372ddef8f2be5018569109579d14a1239fecdc7517dfbea98c7d671f819c/merged/etc/passwd: no such file or directory"
	Dec 25 19:03:02 no-preload-148352 crio[569]: time="2025-12-25T19:03:02.984635096Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/ea9b372ddef8f2be5018569109579d14a1239fecdc7517dfbea98c7d671f819c/merged/etc/group: no such file or directory"
	Dec 25 19:03:02 no-preload-148352 crio[569]: time="2025-12-25T19:03:02.984948099Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 25 19:03:03 no-preload-148352 crio[569]: time="2025-12-25T19:03:03.0170308Z" level=info msg="Created container a92c9aa96d75456f5f2159899f86a6e08449c6b8d6c47573dff69a819b4c3e43: kube-system/storage-provisioner/storage-provisioner" id=60f812f4-69ef-440f-93cd-52e3e5706096 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 25 19:03:03 no-preload-148352 crio[569]: time="2025-12-25T19:03:03.017676026Z" level=info msg="Starting container: a92c9aa96d75456f5f2159899f86a6e08449c6b8d6c47573dff69a819b4c3e43" id=d9c248bc-a0c9-4243-a0fc-b9a0cfcee170 name=/runtime.v1.RuntimeService/StartContainer
	Dec 25 19:03:03 no-preload-148352 crio[569]: time="2025-12-25T19:03:03.019721168Z" level=info msg="Started container" PID=1784 containerID=a92c9aa96d75456f5f2159899f86a6e08449c6b8d6c47573dff69a819b4c3e43 description=kube-system/storage-provisioner/storage-provisioner id=d9c248bc-a0c9-4243-a0fc-b9a0cfcee170 name=/runtime.v1.RuntimeService/StartContainer sandboxID=e58ed4bd2a110888dfd64ef40bb73272e399aa93aa49ce5ad9e1a2920905b380
	Dec 25 19:03:16 no-preload-148352 crio[569]: time="2025-12-25T19:03:16.856765053Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=fea4ee6e-52d3-4c81-bb9e-d6549a139f24 name=/runtime.v1.ImageService/ImageStatus
	Dec 25 19:03:16 no-preload-148352 crio[569]: time="2025-12-25T19:03:16.858037102Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=f21fe86d-1627-4aca-b9fc-92b3f91e7829 name=/runtime.v1.ImageService/ImageStatus
	Dec 25 19:03:16 no-preload-148352 crio[569]: time="2025-12-25T19:03:16.859081162Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-gbfkd/dashboard-metrics-scraper" id=f649d2df-bef6-48a4-9c1b-138fef61e68b name=/runtime.v1.RuntimeService/CreateContainer
	Dec 25 19:03:16 no-preload-148352 crio[569]: time="2025-12-25T19:03:16.859234347Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 25 19:03:16 no-preload-148352 crio[569]: time="2025-12-25T19:03:16.865986801Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 25 19:03:16 no-preload-148352 crio[569]: time="2025-12-25T19:03:16.866436887Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 25 19:03:16 no-preload-148352 crio[569]: time="2025-12-25T19:03:16.893012211Z" level=info msg="Created container 4d94c5064f5944f34332f4dd87f37ed8394eeca7c7aa67e3c9c70c705f594c8b: kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-gbfkd/dashboard-metrics-scraper" id=f649d2df-bef6-48a4-9c1b-138fef61e68b name=/runtime.v1.RuntimeService/CreateContainer
	Dec 25 19:03:16 no-preload-148352 crio[569]: time="2025-12-25T19:03:16.893679125Z" level=info msg="Starting container: 4d94c5064f5944f34332f4dd87f37ed8394eeca7c7aa67e3c9c70c705f594c8b" id=9b37b4ef-2b84-41c0-b815-707ca6109487 name=/runtime.v1.RuntimeService/StartContainer
	Dec 25 19:03:16 no-preload-148352 crio[569]: time="2025-12-25T19:03:16.896039791Z" level=info msg="Started container" PID=1820 containerID=4d94c5064f5944f34332f4dd87f37ed8394eeca7c7aa67e3c9c70c705f594c8b description=kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-gbfkd/dashboard-metrics-scraper id=9b37b4ef-2b84-41c0-b815-707ca6109487 name=/runtime.v1.RuntimeService/StartContainer sandboxID=c803bd78efb4feff7779659502bc08b89d9f59fc19fe72a7696ebc331d76a452
	Dec 25 19:03:17 no-preload-148352 crio[569]: time="2025-12-25T19:03:17.014368721Z" level=info msg="Removing container: af562d65ffa9a9c4b367299de55b10857f967e0f6508713db23f5acea7888a42" id=8f11160c-d919-426d-9f24-e7eefaa16086 name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 25 19:03:17 no-preload-148352 crio[569]: time="2025-12-25T19:03:17.025297352Z" level=info msg="Removed container af562d65ffa9a9c4b367299de55b10857f967e0f6508713db23f5acea7888a42: kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-gbfkd/dashboard-metrics-scraper" id=8f11160c-d919-426d-9f24-e7eefaa16086 name=/runtime.v1.RuntimeService/RemoveContainer
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED             STATE               NAME                        ATTEMPT             POD ID              POD                                          NAMESPACE
	4d94c5064f594       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           10 seconds ago      Exited              dashboard-metrics-scraper   3                   c803bd78efb4f       dashboard-metrics-scraper-867fb5f87b-gbfkd   kubernetes-dashboard
	a92c9aa96d754       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           23 seconds ago      Running             storage-provisioner         1                   e58ed4bd2a110       storage-provisioner                          kube-system
	901f76356987e       docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029   46 seconds ago      Running             kubernetes-dashboard        0                   26c6f721380f5       kubernetes-dashboard-b84665fb8-5ngsn         kubernetes-dashboard
	58b0d8852ca6e       56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c                                           54 seconds ago      Running             busybox                     1                   f88a7476501fc       busybox                                      default
	cd48b0389f086       aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139                                           54 seconds ago      Running             coredns                     0                   116d413a07191       coredns-7d764666f9-lqvms                     kube-system
	e3e24c594c2e9       4921d7a6dffa922dd679732ba4797085c4f39e9a53bee8b6fdb1d463e8571251                                           54 seconds ago      Running             kindnet-cni                 0                   1c154aad786b6       kindnet-jx25d                                kube-system
	cc74c6a68e0e6       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           54 seconds ago      Exited              storage-provisioner         0                   e58ed4bd2a110       storage-provisioner                          kube-system
	55f12125d0d2e       af0321f3a4f388cfb978464739c323ebf891a7b0b50cdfd7179e92f141dad42a                                           54 seconds ago      Running             kube-proxy                  0                   7e635c0ff1e6d       kube-proxy-j2p4x                             kube-system
	2f3a4cbe6949d       5032a56602e1b9bd8856699701b6148aa1b9901d05b61f893df3b57f84aca614                                           57 seconds ago      Running             kube-controller-manager     0                   4036767cc992f       kube-controller-manager-no-preload-148352    kube-system
	bb2011f8a3910       0a108f7189562e99793bdecab61fdf1a7c9d913af3385de9da17fb9d6ff430e2                                           57 seconds ago      Running             etcd                        0                   b8df7d7974bf9       etcd-no-preload-148352                       kube-system
	47366819032b3       58865405a13bccac1d74bc3f446dddd22e6ef0d7ee8b52363c86dd31838976ce                                           57 seconds ago      Running             kube-apiserver              0                   afce916cd6f23       kube-apiserver-no-preload-148352             kube-system
	aa7daa7b6db66       73f80cdc073daa4d501207f9e6dec1fa9eea5f27e8d347b8a0c4bad8811eecdc                                           57 seconds ago      Running             kube-scheduler              0                   14060525355de       kube-scheduler-no-preload-148352             kube-system
	
	
	==> coredns [cd48b0389f0865406b664205dcf7168f2c40b064af72c3b306f1eaf26e9b9128] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.13.1
	linux/amd64, go1.25.2, 1db4568
	[INFO] 127.0.0.1:48013 - 64132 "HINFO IN 6549156959973943132.7448233143510595798. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.029508599s
	[INFO] plugin/ready: Plugins not ready: "kubernetes"
	[INFO] plugin/ready: Plugins not ready: "kubernetes"
	[INFO] plugin/ready: Plugins not ready: "kubernetes"
	[INFO] plugin/ready: Plugins not ready: "kubernetes"
	[ERROR] plugin/kubernetes: Failed to watch
	[ERROR] plugin/kubernetes: Failed to watch
	[ERROR] plugin/kubernetes: Failed to watch
	
	
	==> describe nodes <==
	Name:               no-preload-148352
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=no-preload-148352
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=65b0339f3ab6fa9cf527eb915d9288ef7a9c7fef
	                    minikube.k8s.io/name=no-preload-148352
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_25T19_01_34_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 25 Dec 2025 19:01:31 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-148352
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 25 Dec 2025 19:03:22 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 25 Dec 2025 19:03:02 +0000   Thu, 25 Dec 2025 19:01:30 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 25 Dec 2025 19:03:02 +0000   Thu, 25 Dec 2025 19:01:30 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 25 Dec 2025 19:03:02 +0000   Thu, 25 Dec 2025 19:01:30 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 25 Dec 2025 19:03:02 +0000   Thu, 25 Dec 2025 19:02:31 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    no-preload-148352
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863360Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863360Ki
	  pods:               110
	System Info:
	  Machine ID:                 6a05ebbd9dbde47388fc365d694bbbfd
	  System UUID:                de63609a-6f51-4a32-ad70-d0138650b5f8
	  Boot ID:                    665c5054-bd76-444c-ba4d-23c4edde1464
	  Kernel Version:             6.8.0-1045-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.3
	  Kubelet Version:            v1.35.0-rc.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         92s
	  kube-system                 coredns-7d764666f9-lqvms                      100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     108s
	  kube-system                 etcd-no-preload-148352                        100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         114s
	  kube-system                 kindnet-jx25d                                 100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      108s
	  kube-system                 kube-apiserver-no-preload-148352              250m (3%)     0 (0%)      0 (0%)           0 (0%)         114s
	  kube-system                 kube-controller-manager-no-preload-148352     200m (2%)     0 (0%)      0 (0%)           0 (0%)         113s
	  kube-system                 kube-proxy-j2p4x                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         108s
	  kube-system                 kube-scheduler-no-preload-148352              100m (1%)     0 (0%)      0 (0%)           0 (0%)         113s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         107s
	  kubernetes-dashboard        dashboard-metrics-scraper-867fb5f87b-gbfkd    0 (0%)        0 (0%)      0 (0%)           0 (0%)         52s
	  kubernetes-dashboard        kubernetes-dashboard-b84665fb8-5ngsn          0 (0%)        0 (0%)      0 (0%)           0 (0%)         52s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason          Age   From             Message
	  ----    ------          ----  ----             -------
	  Normal  RegisteredNode  109s  node-controller  Node no-preload-148352 event: Registered Node no-preload-148352 in Controller
	  Normal  RegisteredNode  53s   node-controller  Node no-preload-148352 event: Registered Node no-preload-148352 in Controller
	
	
	==> dmesg <==
	[Dec25 18:17] MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
	[  +0.001703] TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details.
	[  +0.001000] MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
	[  +0.084012] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
	[  +0.391152] i8042: Warning: Keylock active
	[  +0.010665] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.485479] block sda: the capability attribute has been deprecated.
	[  +0.079658] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.024208] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +4.790329] kauditd_printk_skb: 47 callbacks suppressed
	
	
	==> etcd [bb2011f8a39109b797fb7b1bf01cff317738a18c03f9c14941817a74f2e323b6] <==
	{"level":"info","ts":"2025-12-25T19:02:29.435282Z","caller":"membership/cluster.go:674","msg":"updated cluster version","cluster-id":"68eaea490fab4e05","local-member-id":"9f0758e1c58a86ed","from":"3.6","to":"3.6"}
	{"level":"info","ts":"2025-12-25T19:02:29.435303Z","caller":"embed/etcd.go:890","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2025-12-25T19:02:29.435369Z","caller":"fileutil/purge.go:49","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap.db","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-12-25T19:02:29.435409Z","caller":"fileutil/purge.go:49","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-12-25T19:02:29.435419Z","caller":"fileutil/purge.go:49","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-12-25T19:02:29.436091Z","caller":"embed/etcd.go:640","msg":"serving peer traffic","address":"192.168.85.2:2380"}
	{"level":"info","ts":"2025-12-25T19:02:29.436121Z","caller":"embed/etcd.go:611","msg":"cmux::serve","address":"192.168.85.2:2380"}
	{"level":"info","ts":"2025-12-25T19:02:30.322204Z","logger":"raft","caller":"v3@v3.6.0/raft.go:988","msg":"9f0758e1c58a86ed is starting a new election at term 2"}
	{"level":"info","ts":"2025-12-25T19:02:30.322253Z","logger":"raft","caller":"v3@v3.6.0/raft.go:930","msg":"9f0758e1c58a86ed became pre-candidate at term 2"}
	{"level":"info","ts":"2025-12-25T19:02:30.322329Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1077","msg":"9f0758e1c58a86ed received MsgPreVoteResp from 9f0758e1c58a86ed at term 2"}
	{"level":"info","ts":"2025-12-25T19:02:30.322342Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1693","msg":"9f0758e1c58a86ed has received 1 MsgPreVoteResp votes and 0 vote rejections"}
	{"level":"info","ts":"2025-12-25T19:02:30.322359Z","logger":"raft","caller":"v3@v3.6.0/raft.go:912","msg":"9f0758e1c58a86ed became candidate at term 3"}
	{"level":"info","ts":"2025-12-25T19:02:30.323161Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1077","msg":"9f0758e1c58a86ed received MsgVoteResp from 9f0758e1c58a86ed at term 3"}
	{"level":"info","ts":"2025-12-25T19:02:30.323175Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1693","msg":"9f0758e1c58a86ed has received 1 MsgVoteResp votes and 0 vote rejections"}
	{"level":"info","ts":"2025-12-25T19:02:30.323188Z","logger":"raft","caller":"v3@v3.6.0/raft.go:970","msg":"9f0758e1c58a86ed became leader at term 3"}
	{"level":"info","ts":"2025-12-25T19:02:30.323199Z","logger":"raft","caller":"v3@v3.6.0/node.go:370","msg":"raft.node: 9f0758e1c58a86ed elected leader 9f0758e1c58a86ed at term 3"}
	{"level":"info","ts":"2025-12-25T19:02:30.323814Z","caller":"etcdserver/server.go:1820","msg":"published local member to cluster through raft","local-member-id":"9f0758e1c58a86ed","local-member-attributes":"{Name:no-preload-148352 ClientURLs:[https://192.168.85.2:2379]}","cluster-id":"68eaea490fab4e05","publish-timeout":"7s"}
	{"level":"info","ts":"2025-12-25T19:02:30.323821Z","caller":"embed/serve.go:138","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-12-25T19:02:30.323854Z","caller":"embed/serve.go:138","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-12-25T19:02:30.324132Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-12-25T19:02:30.324221Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2025-12-25T19:02:30.325652Z","caller":"v3rpc/health.go:63","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-12-25T19:02:30.325864Z","caller":"v3rpc/health.go:63","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-12-25T19:02:30.328745Z","caller":"embed/serve.go:283","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.85.2:2379"}
	{"level":"info","ts":"2025-12-25T19:02:30.328821Z","caller":"embed/serve.go:283","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	
	
	==> kernel <==
	 19:03:27 up 45 min,  0 user,  load average: 2.81, 2.48, 1.80
	Linux no-preload-148352 6.8.0-1045-gcp #48~22.04.1-Ubuntu SMP Tue Nov 25 13:07:56 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [e3e24c594c2e90a5b96c7c7292be2263392feb5e70d00b1ec00eb84d2a0fbf17] <==
	I1225 19:02:32.464934       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1225 19:02:32.465232       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1225 19:02:32.465384       1 main.go:148] setting mtu 1500 for CNI 
	I1225 19:02:32.465409       1 main.go:178] kindnetd IP family: "ipv4"
	I1225 19:02:32.465440       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-12-25T19:02:32Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1225 19:02:32.665131       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1225 19:02:32.665177       1 controller.go:381] "Waiting for informer caches to sync"
	I1225 19:02:32.665191       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1225 19:02:32.665569       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1225 19:02:32.965382       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1225 19:02:32.965410       1 metrics.go:72] Registering metrics
	I1225 19:02:32.965465       1 controller.go:711] "Syncing nftables rules"
	I1225 19:02:42.665040       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1225 19:02:42.665125       1 main.go:301] handling current node
	I1225 19:02:52.665540       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1225 19:02:52.665574       1 main.go:301] handling current node
	I1225 19:03:02.665307       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1225 19:03:02.665340       1 main.go:301] handling current node
	I1225 19:03:12.666191       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1225 19:03:12.666240       1 main.go:301] handling current node
	I1225 19:03:22.666085       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1225 19:03:22.666149       1 main.go:301] handling current node
	
	
	==> kube-apiserver [47366819032b30036912ff5f63dfa944e254928f33476aba04aaf69af88aaf71] <==
	I1225 19:02:31.250416       1 shared_informer.go:356] "Caches are synced" controller="crd-autoregister"
	I1225 19:02:31.250655       1 shared_informer.go:377] "Caches are synced"
	I1225 19:02:31.251082       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1225 19:02:31.251293       1 aggregator.go:187] initial CRD sync complete...
	I1225 19:02:31.251308       1 autoregister_controller.go:144] Starting autoregister controller
	I1225 19:02:31.251315       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1225 19:02:31.251320       1 cache.go:39] Caches are synced for autoregister controller
	I1225 19:02:31.251463       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1225 19:02:31.251484       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1225 19:02:31.251590       1 shared_informer.go:377] "Caches are synced"
	I1225 19:02:31.256275       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	E1225 19:02:31.257446       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1225 19:02:31.263121       1 cidrallocator.go:302] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1225 19:02:31.278553       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1225 19:02:31.511216       1 controller.go:667] quota admission added evaluator for: namespaces
	I1225 19:02:31.538659       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1225 19:02:31.555647       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1225 19:02:31.561985       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1225 19:02:31.567853       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1225 19:02:31.600579       1 alloc.go:329] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.106.117.81"}
	I1225 19:02:31.611464       1 alloc.go:329] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.111.85.114"}
	I1225 19:02:32.157204       1 storage_scheduling.go:139] all system priority classes are created successfully or already exist.
	I1225 19:02:34.776435       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1225 19:02:34.824840       1 controller.go:667] quota admission added evaluator for: endpoints
	I1225 19:02:35.024250       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [2f3a4cbe6949d2645c6993b4cc7109abf638d7d4a738d0209ae98d0d57e87c1b] <==
	I1225 19:02:34.379108       1 shared_informer.go:377] "Caches are synced"
	I1225 19:02:34.379439       1 node_lifecycle_controller.go:1080] "Controller detected that zone is now in new state" zone="" newState="Normal"
	I1225 19:02:34.379426       1 range_allocator.go:181] "Starting range CIDR allocator"
	I1225 19:02:34.380004       1 shared_informer.go:370] "Waiting for caches to sync"
	I1225 19:02:34.380010       1 shared_informer.go:377] "Caches are synced"
	I1225 19:02:34.380061       1 shared_informer.go:377] "Caches are synced"
	I1225 19:02:34.380110       1 shared_informer.go:377] "Caches are synced"
	I1225 19:02:34.380129       1 shared_informer.go:377] "Caches are synced"
	I1225 19:02:34.380152       1 shared_informer.go:377] "Caches are synced"
	I1225 19:02:34.380212       1 shared_informer.go:377] "Caches are synced"
	I1225 19:02:34.380275       1 shared_informer.go:377] "Caches are synced"
	I1225 19:02:34.380293       1 shared_informer.go:377] "Caches are synced"
	I1225 19:02:34.380463       1 shared_informer.go:377] "Caches are synced"
	I1225 19:02:34.382305       1 shared_informer.go:377] "Caches are synced"
	I1225 19:02:34.383150       1 shared_informer.go:377] "Caches are synced"
	I1225 19:02:34.385214       1 shared_informer.go:370] "Waiting for caches to sync"
	I1225 19:02:34.387963       1 shared_informer.go:377] "Caches are synced"
	I1225 19:02:34.387981       1 shared_informer.go:377] "Caches are synced"
	I1225 19:02:34.387989       1 shared_informer.go:377] "Caches are synced"
	I1225 19:02:34.388014       1 shared_informer.go:377] "Caches are synced"
	I1225 19:02:34.401416       1 shared_informer.go:377] "Caches are synced"
	I1225 19:02:34.479760       1 shared_informer.go:377] "Caches are synced"
	I1225 19:02:34.479780       1 garbagecollector.go:166] "Garbage collector: all resource monitors have synced"
	I1225 19:02:34.479784       1 garbagecollector.go:169] "Proceeding to collect garbage"
	I1225 19:02:34.485374       1 shared_informer.go:377] "Caches are synced"
	
	
	==> kube-proxy [55f12125d0d2e0b7f466cdebd8a8770b9c7062b5f540d2dcaf8cca748d880059] <==
	I1225 19:02:32.257801       1 server_linux.go:53] "Using iptables proxy"
	I1225 19:02:32.329041       1 shared_informer.go:370] "Waiting for caches to sync"
	I1225 19:02:32.429475       1 shared_informer.go:377] "Caches are synced"
	I1225 19:02:32.429509       1 server.go:218] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1225 19:02:32.429615       1 server.go:255] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1225 19:02:32.447868       1 server.go:264] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1225 19:02:32.447958       1 server_linux.go:136] "Using iptables Proxier"
	I1225 19:02:32.453260       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1225 19:02:32.453566       1 server.go:529] "Version info" version="v1.35.0-rc.1"
	I1225 19:02:32.453592       1 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1225 19:02:32.455613       1 config.go:309] "Starting node config controller"
	I1225 19:02:32.455772       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1225 19:02:32.455806       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1225 19:02:32.455914       1 config.go:403] "Starting serviceCIDR config controller"
	I1225 19:02:32.455954       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1225 19:02:32.455916       1 config.go:200] "Starting service config controller"
	I1225 19:02:32.456032       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1225 19:02:32.455928       1 config.go:106] "Starting endpoint slice config controller"
	I1225 19:02:32.456092       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1225 19:02:32.556416       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1225 19:02:32.556429       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1225 19:02:32.556445       1 shared_informer.go:356] "Caches are synced" controller="service config"
	
	
	==> kube-scheduler [aa7daa7b6db664c65cb970f6372118ff3edf3e9ed558da28a08f0e134f753051] <==
	I1225 19:02:29.750975       1 serving.go:386] Generated self-signed cert in-memory
	W1225 19:02:31.178961       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1225 19:02:31.178999       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1225 19:02:31.179011       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1225 19:02:31.179021       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1225 19:02:31.208364       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.35.0-rc.1"
	I1225 19:02:31.208414       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1225 19:02:31.214502       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1225 19:02:31.214541       1 shared_informer.go:370] "Waiting for caches to sync"
	I1225 19:02:31.214655       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1225 19:02:31.215504       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1225 19:02:31.315354       1 shared_informer.go:377] "Caches are synced"
	
	
	==> kubelet <==
	Dec 25 19:02:48 no-preload-148352 kubelet[724]: E1225 19:02:48.934774     724 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/etcd-no-preload-148352" containerName="etcd"
	Dec 25 19:02:49 no-preload-148352 kubelet[724]: E1225 19:02:49.857058     724 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-gbfkd" containerName="dashboard-metrics-scraper"
	Dec 25 19:02:49 no-preload-148352 kubelet[724]: I1225 19:02:49.857127     724 scope.go:122] "RemoveContainer" containerID="27fa53c998eaf22e08f73724aba07761b5843089747743aa04a69356a323b28d"
	Dec 25 19:02:49 no-preload-148352 kubelet[724]: I1225 19:02:49.939698     724 scope.go:122] "RemoveContainer" containerID="27fa53c998eaf22e08f73724aba07761b5843089747743aa04a69356a323b28d"
	Dec 25 19:02:49 no-preload-148352 kubelet[724]: E1225 19:02:49.939984     724 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-gbfkd" containerName="dashboard-metrics-scraper"
	Dec 25 19:02:49 no-preload-148352 kubelet[724]: I1225 19:02:49.940017     724 scope.go:122] "RemoveContainer" containerID="af562d65ffa9a9c4b367299de55b10857f967e0f6508713db23f5acea7888a42"
	Dec 25 19:02:49 no-preload-148352 kubelet[724]: E1225 19:02:49.940217     724 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-867fb5f87b-gbfkd_kubernetes-dashboard(3a3db07e-732b-41e1-ab00-f60b35e0a14c)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-gbfkd" podUID="3a3db07e-732b-41e1-ab00-f60b35e0a14c"
	Dec 25 19:02:52 no-preload-148352 kubelet[724]: E1225 19:02:52.304142     724 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-gbfkd" containerName="dashboard-metrics-scraper"
	Dec 25 19:02:52 no-preload-148352 kubelet[724]: I1225 19:02:52.304191     724 scope.go:122] "RemoveContainer" containerID="af562d65ffa9a9c4b367299de55b10857f967e0f6508713db23f5acea7888a42"
	Dec 25 19:02:52 no-preload-148352 kubelet[724]: E1225 19:02:52.304411     724 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-867fb5f87b-gbfkd_kubernetes-dashboard(3a3db07e-732b-41e1-ab00-f60b35e0a14c)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-gbfkd" podUID="3a3db07e-732b-41e1-ab00-f60b35e0a14c"
	Dec 25 19:03:02 no-preload-148352 kubelet[724]: I1225 19:03:02.976417     724 scope.go:122] "RemoveContainer" containerID="cc74c6a68e0e6d46d88281d2d099411a95d6a602b396328af5ea78c57473e7dc"
	Dec 25 19:03:10 no-preload-148352 kubelet[724]: E1225 19:03:10.230249     724 prober_manager.go:209] "Readiness probe already exists for container" pod="kube-system/coredns-7d764666f9-lqvms" containerName="coredns"
	Dec 25 19:03:16 no-preload-148352 kubelet[724]: E1225 19:03:16.856123     724 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-gbfkd" containerName="dashboard-metrics-scraper"
	Dec 25 19:03:16 no-preload-148352 kubelet[724]: I1225 19:03:16.856176     724 scope.go:122] "RemoveContainer" containerID="af562d65ffa9a9c4b367299de55b10857f967e0f6508713db23f5acea7888a42"
	Dec 25 19:03:17 no-preload-148352 kubelet[724]: I1225 19:03:17.013049     724 scope.go:122] "RemoveContainer" containerID="af562d65ffa9a9c4b367299de55b10857f967e0f6508713db23f5acea7888a42"
	Dec 25 19:03:17 no-preload-148352 kubelet[724]: E1225 19:03:17.013281     724 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-gbfkd" containerName="dashboard-metrics-scraper"
	Dec 25 19:03:17 no-preload-148352 kubelet[724]: I1225 19:03:17.013319     724 scope.go:122] "RemoveContainer" containerID="4d94c5064f5944f34332f4dd87f37ed8394eeca7c7aa67e3c9c70c705f594c8b"
	Dec 25 19:03:17 no-preload-148352 kubelet[724]: E1225 19:03:17.013521     724 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-867fb5f87b-gbfkd_kubernetes-dashboard(3a3db07e-732b-41e1-ab00-f60b35e0a14c)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-gbfkd" podUID="3a3db07e-732b-41e1-ab00-f60b35e0a14c"
	Dec 25 19:03:22 no-preload-148352 kubelet[724]: E1225 19:03:22.304419     724 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-gbfkd" containerName="dashboard-metrics-scraper"
	Dec 25 19:03:22 no-preload-148352 kubelet[724]: I1225 19:03:22.304460     724 scope.go:122] "RemoveContainer" containerID="4d94c5064f5944f34332f4dd87f37ed8394eeca7c7aa67e3c9c70c705f594c8b"
	Dec 25 19:03:22 no-preload-148352 kubelet[724]: E1225 19:03:22.304643     724 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-867fb5f87b-gbfkd_kubernetes-dashboard(3a3db07e-732b-41e1-ab00-f60b35e0a14c)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-gbfkd" podUID="3a3db07e-732b-41e1-ab00-f60b35e0a14c"
	Dec 25 19:03:23 no-preload-148352 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Dec 25 19:03:23 no-preload-148352 systemd[1]: kubelet.service: Deactivated successfully.
	Dec 25 19:03:23 no-preload-148352 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 25 19:03:23 no-preload-148352 systemd[1]: kubelet.service: Consumed 1.773s CPU time.
	
	
	==> kubernetes-dashboard [901f76356987e3e596f87ef92b962ce67c143eef3f37a7b4ac37dbde884cecae] <==
	2025/12/25 19:02:40 Using namespace: kubernetes-dashboard
	2025/12/25 19:02:40 Using in-cluster config to connect to apiserver
	2025/12/25 19:02:40 Using secret token for csrf signing
	2025/12/25 19:02:40 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/12/25 19:02:40 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/12/25 19:02:40 Successful initial request to the apiserver, version: v1.35.0-rc.1
	2025/12/25 19:02:40 Generating JWE encryption key
	2025/12/25 19:02:40 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/12/25 19:02:40 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/12/25 19:02:40 Initializing JWE encryption key from synchronized object
	2025/12/25 19:02:40 Creating in-cluster Sidecar client
	2025/12/25 19:02:40 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/12/25 19:02:40 Serving insecurely on HTTP port: 9090
	2025/12/25 19:03:10 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/12/25 19:02:40 Starting overwatch
	
	
	==> storage-provisioner [a92c9aa96d75456f5f2159899f86a6e08449c6b8d6c47573dff69a819b4c3e43] <==
	I1225 19:03:03.033937       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1225 19:03:03.042604       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1225 19:03:03.042657       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1225 19:03:03.044804       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1225 19:03:06.499778       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1225 19:03:10.760268       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1225 19:03:14.359017       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1225 19:03:17.413612       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1225 19:03:20.436355       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1225 19:03:20.440724       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1225 19:03:20.440883       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1225 19:03:20.440949       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"4f3e1ed8-81d0-4039-80b9-a2f1ed9a1f41", APIVersion:"v1", ResourceVersion:"653", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-148352_6f856ba6-fd2f-4dc6-9a66-7f9f70461a64 became leader
	I1225 19:03:20.441037       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-148352_6f856ba6-fd2f-4dc6-9a66-7f9f70461a64!
	W1225 19:03:20.442693       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1225 19:03:20.445998       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1225 19:03:20.541300       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-148352_6f856ba6-fd2f-4dc6-9a66-7f9f70461a64!
	W1225 19:03:22.449492       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1225 19:03:22.454378       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1225 19:03:24.458502       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1225 19:03:24.463715       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1225 19:03:26.467886       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1225 19:03:26.474783       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	
	
	==> storage-provisioner [cc74c6a68e0e6d46d88281d2d099411a95d6a602b396328af5ea78c57473e7dc] <==
	I1225 19:02:32.225172       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1225 19:03:02.229535       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

                                                
                                                
-- /stdout --
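A note on the CoreDNS step in the addon-install log above: the bash one-liner run at 19:03:25 edits the coredns ConfigMap in place so that host.minikube.internal resolves to the host gateway, which the "host record injected into CoreDNS's ConfigMap" line then confirms. A minimal sketch of that edit, assuming a reachable kubeconfig and reusing the gateway IP 192.168.103.1 from that log line (the original command also inserts a "log" directive after "errors", omitted here):

    # fetch the live Corefile, insert a hosts block before the forward stanza, and push it back
    kubectl -n kube-system get configmap coredns -o yaml |
      sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.103.1 host.minikube.internal\n           fallthrough\n        }' |
      kubectl replace -f -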
helpers_test.go:263: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-148352 -n no-preload-148352
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-148352 -n no-preload-148352: exit status 2 (350.735ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:270: (dbg) Run:  kubectl --context no-preload-148352 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:294: <<< TestStartStop/group/no-preload/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:295: ---------------------/post-mortem---------------------------------
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect no-preload-148352
helpers_test.go:244: (dbg) docker inspect no-preload-148352:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "41819bf1bd4bc2d54346cbc83d4feefe6b78f5e9c433c26cf65f99a4307626cc",
	        "Created": "2025-12-25T19:01:06.66476254Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 281486,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-25T19:02:22.656378334Z",
	            "FinishedAt": "2025-12-25T19:02:21.402997407Z"
	        },
	        "Image": "sha256:c444b5e11df9d6bc496256b0e11f4e11deb33a885211ded2c3a4667df55bcb6b",
	        "ResolvConfPath": "/var/lib/docker/containers/41819bf1bd4bc2d54346cbc83d4feefe6b78f5e9c433c26cf65f99a4307626cc/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/41819bf1bd4bc2d54346cbc83d4feefe6b78f5e9c433c26cf65f99a4307626cc/hostname",
	        "HostsPath": "/var/lib/docker/containers/41819bf1bd4bc2d54346cbc83d4feefe6b78f5e9c433c26cf65f99a4307626cc/hosts",
	        "LogPath": "/var/lib/docker/containers/41819bf1bd4bc2d54346cbc83d4feefe6b78f5e9c433c26cf65f99a4307626cc/41819bf1bd4bc2d54346cbc83d4feefe6b78f5e9c433c26cf65f99a4307626cc-json.log",
	        "Name": "/no-preload-148352",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "no-preload-148352:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "no-preload-148352",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "41819bf1bd4bc2d54346cbc83d4feefe6b78f5e9c433c26cf65f99a4307626cc",
	                "LowerDir": "/var/lib/docker/overlay2/ce53440f3336a56e5d3b7cdce9b0468a1a553e258f9f62a74535927ca0c65775-init/diff:/var/lib/docker/overlay2/8152586e7e91edad0090b5c322534edd1346ae6dc28cbca1827aa4c23f366758/diff",
	                "MergedDir": "/var/lib/docker/overlay2/ce53440f3336a56e5d3b7cdce9b0468a1a553e258f9f62a74535927ca0c65775/merged",
	                "UpperDir": "/var/lib/docker/overlay2/ce53440f3336a56e5d3b7cdce9b0468a1a553e258f9f62a74535927ca0c65775/diff",
	                "WorkDir": "/var/lib/docker/overlay2/ce53440f3336a56e5d3b7cdce9b0468a1a553e258f9f62a74535927ca0c65775/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "no-preload-148352",
	                "Source": "/var/lib/docker/volumes/no-preload-148352/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "no-preload-148352",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "no-preload-148352",
	                "name.minikube.sigs.k8s.io": "no-preload-148352",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "5794287c56d63287f24d27d6403a0481c248bdbbd997eb01b1d0757b39dc7467",
	            "SandboxKey": "/var/run/docker/netns/5794287c56d6",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33078"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33079"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33082"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33080"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33081"
	                    }
	                ]
	            },
	            "Networks": {
	                "no-preload-148352": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "7fdcf6cdd30d0ba02321a77fbb55e094d77a371075d285e3dbc5b2c78f7f50f7",
	                    "EndpointID": "ef793fdf1bb46a34f49e79712ce3ef6da23e74ec06ec9d0199b8c4dbd1d47493",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "MacAddress": "be:3d:b2:08:fc:e2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "no-preload-148352",
	                        "41819bf1bd4b"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
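The inspect output above is what the harness reads to map the container's published ports back to host ports (here 22/tcp is bound to 127.0.0.1:33078). A minimal way to reproduce that lookup by hand, reusing the same Go template the harness itself runs later in this log, against the container name from this run:

	docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' no-preload-148352
	# prints the host-side SSH port for the kic container; 33078 in this run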
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-148352 -n no-preload-148352
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-148352 -n no-preload-148352: exit status 2 (352.413364ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
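The status probe above pulls a single field ({{.Host}}) out of minikube's status via a Go template, and the harness treats the non-zero exit code as advisory because the host itself reports Running. A sketch of the same kind of check against additional status fields, assuming the standard Kubelet and APIServer fields are exposed to the template in the same way .Host is:

	out/minikube-linux-amd64 status -p no-preload-148352 --format='host:{{.Host}} kubelet:{{.Kubelet}} apiserver:{{.APIServer}}'
	# a non-zero exit (such as the exit status 2 above) indicates at least one component is not in its expected state, which the harness notes "may be ok"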
helpers_test.go:253: <<< TestStartStop/group/no-preload/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-148352 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-amd64 -p no-preload-148352 logs -n 25: (1.069017554s)
helpers_test.go:261: TestStartStop/group/no-preload/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬──────────────
───────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼──────────────
───────┤
	│ start   │ -p cert-expiration-002470 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio                                                                                                                                     │ cert-expiration-002470       │ jenkins │ v1.37.0 │ 25 Dec 25 19:00 UTC │ 25 Dec 25 19:01 UTC │
	│ delete  │ -p cert-expiration-002470                                                                                                                                                                                                                     │ cert-expiration-002470       │ jenkins │ v1.37.0 │ 25 Dec 25 19:01 UTC │ 25 Dec 25 19:01 UTC │
	│ start   │ -p no-preload-148352 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-rc.1                                                                                  │ no-preload-148352            │ jenkins │ v1.37.0 │ 25 Dec 25 19:01 UTC │ 25 Dec 25 19:01 UTC │
	│ delete  │ -p running-upgrade-861192                                                                                                                                                                                                                     │ running-upgrade-861192       │ jenkins │ v1.37.0 │ 25 Dec 25 19:01 UTC │ 25 Dec 25 19:01 UTC │
	│ start   │ -p embed-certs-684693 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.3                                                                                        │ embed-certs-684693           │ jenkins │ v1.37.0 │ 25 Dec 25 19:01 UTC │ 25 Dec 25 19:02 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-163446 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-163446       │ jenkins │ v1.37.0 │ 25 Dec 25 19:01 UTC │                     │
	│ stop    │ -p old-k8s-version-163446 --alsologtostderr -v=3                                                                                                                                                                                              │ old-k8s-version-163446       │ jenkins │ v1.37.0 │ 25 Dec 25 19:01 UTC │ 25 Dec 25 19:01 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-163446 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-163446       │ jenkins │ v1.37.0 │ 25 Dec 25 19:01 UTC │ 25 Dec 25 19:01 UTC │
	│ start   │ -p old-k8s-version-163446 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-163446       │ jenkins │ v1.37.0 │ 25 Dec 25 19:01 UTC │ 25 Dec 25 19:02 UTC │
	│ addons  │ enable metrics-server -p no-preload-148352 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-148352            │ jenkins │ v1.37.0 │ 25 Dec 25 19:02 UTC │                     │
	│ stop    │ -p no-preload-148352 --alsologtostderr -v=3                                                                                                                                                                                                   │ no-preload-148352            │ jenkins │ v1.37.0 │ 25 Dec 25 19:02 UTC │ 25 Dec 25 19:02 UTC │
	│ addons  │ enable metrics-server -p embed-certs-684693 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-684693           │ jenkins │ v1.37.0 │ 25 Dec 25 19:02 UTC │                     │
	│ stop    │ -p embed-certs-684693 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-684693           │ jenkins │ v1.37.0 │ 25 Dec 25 19:02 UTC │ 25 Dec 25 19:02 UTC │
	│ addons  │ enable dashboard -p no-preload-148352 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-148352            │ jenkins │ v1.37.0 │ 25 Dec 25 19:02 UTC │ 25 Dec 25 19:02 UTC │
	│ start   │ -p no-preload-148352 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-rc.1                                                                                  │ no-preload-148352            │ jenkins │ v1.37.0 │ 25 Dec 25 19:02 UTC │ 25 Dec 25 19:03 UTC │
	│ addons  │ enable dashboard -p embed-certs-684693 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-684693           │ jenkins │ v1.37.0 │ 25 Dec 25 19:02 UTC │ 25 Dec 25 19:02 UTC │
	│ start   │ -p embed-certs-684693 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.3                                                                                        │ embed-certs-684693           │ jenkins │ v1.37.0 │ 25 Dec 25 19:02 UTC │ 25 Dec 25 19:03 UTC │
	│ image   │ old-k8s-version-163446 image list --format=json                                                                                                                                                                                               │ old-k8s-version-163446       │ jenkins │ v1.37.0 │ 25 Dec 25 19:02 UTC │ 25 Dec 25 19:02 UTC │
	│ pause   │ -p old-k8s-version-163446 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-163446       │ jenkins │ v1.37.0 │ 25 Dec 25 19:02 UTC │                     │
	│ delete  │ -p old-k8s-version-163446                                                                                                                                                                                                                     │ old-k8s-version-163446       │ jenkins │ v1.37.0 │ 25 Dec 25 19:02 UTC │ 25 Dec 25 19:03 UTC │
	│ delete  │ -p old-k8s-version-163446                                                                                                                                                                                                                     │ old-k8s-version-163446       │ jenkins │ v1.37.0 │ 25 Dec 25 19:03 UTC │ 25 Dec 25 19:03 UTC │
	│ delete  │ -p disable-driver-mounts-102827                                                                                                                                                                                                               │ disable-driver-mounts-102827 │ jenkins │ v1.37.0 │ 25 Dec 25 19:03 UTC │ 25 Dec 25 19:03 UTC │
	│ start   │ -p default-k8s-diff-port-960022 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.3                                                                      │ default-k8s-diff-port-960022 │ jenkins │ v1.37.0 │ 25 Dec 25 19:03 UTC │                     │
	│ image   │ no-preload-148352 image list --format=json                                                                                                                                                                                                    │ no-preload-148352            │ jenkins │ v1.37.0 │ 25 Dec 25 19:03 UTC │ 25 Dec 25 19:03 UTC │
	│ pause   │ -p no-preload-148352 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-148352            │ jenkins │ v1.37.0 │ 25 Dec 25 19:03 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴──────────────
───────┘
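The audit table above records every minikube invocation made during this job, including the start of the failing profile. That start command can be replayed verbatim from the table if the failure needs to be reproduced outside the harness, e.g.:

	out/minikube-linux-amd64 start -p no-preload-148352 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-rc.1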
	
	
	==> Last Start <==
	Log file created at: 2025/12/25 19:03:03
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1225 19:03:03.260659  290541 out.go:360] Setting OutFile to fd 1 ...
	I1225 19:03:03.260750  290541 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1225 19:03:03.260758  290541 out.go:374] Setting ErrFile to fd 2...
	I1225 19:03:03.260763  290541 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1225 19:03:03.260972  290541 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22301-5579/.minikube/bin
	I1225 19:03:03.261480  290541 out.go:368] Setting JSON to false
	I1225 19:03:03.262644  290541 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":2731,"bootTime":1766686652,"procs":343,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1225 19:03:03.262708  290541 start.go:143] virtualization: kvm guest
	I1225 19:03:03.264770  290541 out.go:179] * [default-k8s-diff-port-960022] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1225 19:03:03.266042  290541 notify.go:221] Checking for updates...
	I1225 19:03:03.266057  290541 out.go:179]   - MINIKUBE_LOCATION=22301
	I1225 19:03:03.267429  290541 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1225 19:03:03.269101  290541 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22301-5579/kubeconfig
	I1225 19:03:03.270287  290541 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22301-5579/.minikube
	I1225 19:03:03.272709  290541 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1225 19:03:03.273925  290541 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1225 19:03:03.275633  290541 config.go:182] Loaded profile config "embed-certs-684693": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1225 19:03:03.275725  290541 config.go:182] Loaded profile config "kubernetes-upgrade-498224": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-rc.1
	I1225 19:03:03.275830  290541 config.go:182] Loaded profile config "no-preload-148352": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-rc.1
	I1225 19:03:03.275955  290541 driver.go:422] Setting default libvirt URI to qemu:///system
	I1225 19:03:03.301169  290541 docker.go:124] docker version: linux-29.1.3:Docker Engine - Community
	I1225 19:03:03.301259  290541 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1225 19:03:03.362636  290541 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:64 OomKillDisable:false NGoroutines:75 SystemTime:2025-12-25 19:03:03.351593327 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:dea7da592f5d1d2b7755e3a161be07f43fad8f75 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1225 19:03:03.362744  290541 docker.go:319] overlay module found
	I1225 19:03:03.365026  290541 out.go:179] * Using the docker driver based on user configuration
	I1225 19:03:03.366911  290541 start.go:309] selected driver: docker
	I1225 19:03:03.366928  290541 start.go:928] validating driver "docker" against <nil>
	I1225 19:03:03.366943  290541 start.go:939] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1225 19:03:03.367476  290541 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1225 19:03:03.425793  290541 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:64 OomKillDisable:false NGoroutines:75 SystemTime:2025-12-25 19:03:03.416183241 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:dea7da592f5d1d2b7755e3a161be07f43fad8f75 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1225 19:03:03.425998  290541 start_flags.go:333] no existing cluster config was found, will generate one from the flags 
	I1225 19:03:03.426447  290541 start_flags.go:1019] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1225 19:03:03.427956  290541 out.go:179] * Using Docker driver with root privileges
	I1225 19:03:03.429194  290541 cni.go:84] Creating CNI manager for ""
	I1225 19:03:03.429263  290541 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1225 19:03:03.429275  290541 start_flags.go:342] Found "CNI" CNI - setting NetworkPlugin=cni
	I1225 19:03:03.429344  290541 start.go:353] cluster config:
	{Name:default-k8s-diff-port-960022 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:default-k8s-diff-port-960022 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:
cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SS
HAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1225 19:03:03.430731  290541 out.go:179] * Starting "default-k8s-diff-port-960022" primary control-plane node in "default-k8s-diff-port-960022" cluster
	I1225 19:03:03.431854  290541 cache.go:134] Beginning downloading kic base image for docker with crio
	I1225 19:03:03.432973  290541 out.go:179] * Pulling base image v0.0.48-1766570851-22316 ...
	I1225 19:03:03.433946  290541 preload.go:188] Checking if preload exists for k8s version v1.34.3 and runtime crio
	I1225 19:03:03.433975  290541 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22301-5579/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.3-cri-o-overlay-amd64.tar.lz4
	I1225 19:03:03.433988  290541 cache.go:65] Caching tarball of preloaded images
	I1225 19:03:03.434048  290541 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a in local docker daemon
	I1225 19:03:03.434083  290541 preload.go:251] Found /home/jenkins/minikube-integration/22301-5579/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1225 19:03:03.434099  290541 cache.go:68] Finished verifying existence of preloaded tar for v1.34.3 on crio
	I1225 19:03:03.434224  290541 profile.go:143] Saving config to /home/jenkins/minikube-integration/22301-5579/.minikube/profiles/default-k8s-diff-port-960022/config.json ...
	I1225 19:03:03.434249  290541 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22301-5579/.minikube/profiles/default-k8s-diff-port-960022/config.json: {Name:mk23e95983e818b85162d68edd988fdf930d6200 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1225 19:03:03.455337  290541 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a in local docker daemon, skipping pull
	I1225 19:03:03.455367  290541 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a exists in daemon, skipping load
	I1225 19:03:03.455388  290541 cache.go:243] Successfully downloaded all kic artifacts
	I1225 19:03:03.455420  290541 start.go:360] acquireMachinesLock for default-k8s-diff-port-960022: {Name:mk439ca411b17a34361cdf557c6ddd774780f327 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1225 19:03:03.455524  290541 start.go:364] duration metric: took 84.004µs to acquireMachinesLock for "default-k8s-diff-port-960022"
	I1225 19:03:03.455550  290541 start.go:93] Provisioning new machine with config: &{Name:default-k8s-diff-port-960022 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:default-k8s-diff-port-960022 Namespace:default API
ServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:
false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false} &{Name: IP: Port:8444 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1225 19:03:03.455636  290541 start.go:125] createHost starting for "" (driver="docker")
	W1225 19:03:01.663815  283722 pod_ready.go:104] pod "coredns-66bc5c9577-n4nqj" is not "Ready", error: <nil>
	W1225 19:03:04.164023  283722 pod_ready.go:104] pod "coredns-66bc5c9577-n4nqj" is not "Ready", error: <nil>
	W1225 19:03:03.753993  281279 pod_ready.go:104] pod "coredns-7d764666f9-lqvms" is not "Ready", error: <nil>
	W1225 19:03:06.252227  281279 pod_ready.go:104] pod "coredns-7d764666f9-lqvms" is not "Ready", error: <nil>
	I1225 19:03:02.771289  260034 api_server.go:299] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1225 19:03:02.771684  260034 api_server.go:315] stopped: https://192.168.94.2:8443/healthz: Get "https://192.168.94.2:8443/healthz": dial tcp 192.168.94.2:8443: connect: connection refused
	I1225 19:03:02.771730  260034 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1225 19:03:02.771779  260034 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1225 19:03:02.802529  260034 cri.go:96] found id: "6286b997d7e536f57739dca40618206bd2111cd0ea9142bc6f4203ad7d126123"
	I1225 19:03:02.802556  260034 cri.go:96] found id: "e3d5802d1e3d9285d4b0925e9f5391b90305352a65abbab3d6461e977e07719c"
	I1225 19:03:02.802563  260034 cri.go:96] found id: ""
	I1225 19:03:02.802570  260034 logs.go:282] 2 containers: [6286b997d7e536f57739dca40618206bd2111cd0ea9142bc6f4203ad7d126123 e3d5802d1e3d9285d4b0925e9f5391b90305352a65abbab3d6461e977e07719c]
	I1225 19:03:02.802620  260034 ssh_runner.go:195] Run: which crictl
	I1225 19:03:02.806803  260034 ssh_runner.go:195] Run: which crictl
	I1225 19:03:02.810869  260034 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1225 19:03:02.810939  260034 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1225 19:03:02.839325  260034 cri.go:96] found id: "b9e20d208a63ece66f4b7d503d5220ba92810f69d6963dee30a8cf70eeec5508"
	I1225 19:03:02.839349  260034 cri.go:96] found id: ""
	I1225 19:03:02.839362  260034 logs.go:282] 1 containers: [b9e20d208a63ece66f4b7d503d5220ba92810f69d6963dee30a8cf70eeec5508]
	I1225 19:03:02.839411  260034 ssh_runner.go:195] Run: which crictl
	I1225 19:03:02.843361  260034 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1225 19:03:02.843426  260034 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1225 19:03:02.872485  260034 cri.go:96] found id: ""
	I1225 19:03:02.872510  260034 logs.go:282] 0 containers: []
	W1225 19:03:02.872521  260034 logs.go:284] No container was found matching "coredns"
	I1225 19:03:02.872528  260034 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1225 19:03:02.872586  260034 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1225 19:03:02.901050  260034 cri.go:96] found id: "5908cdc560e3221d929c96c779b2e73ece21f96acb63b3adbf448cc7de791f2b"
	I1225 19:03:02.901072  260034 cri.go:96] found id: "47a1daa4bde269c4add70fd957319c2bd011e78566a1187d9a76b78fb4005782"
	I1225 19:03:02.901077  260034 cri.go:96] found id: ""
	I1225 19:03:02.901084  260034 logs.go:282] 2 containers: [5908cdc560e3221d929c96c779b2e73ece21f96acb63b3adbf448cc7de791f2b 47a1daa4bde269c4add70fd957319c2bd011e78566a1187d9a76b78fb4005782]
	I1225 19:03:02.901142  260034 ssh_runner.go:195] Run: which crictl
	I1225 19:03:02.905515  260034 ssh_runner.go:195] Run: which crictl
	I1225 19:03:02.909197  260034 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1225 19:03:02.909254  260034 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1225 19:03:02.937731  260034 cri.go:96] found id: ""
	I1225 19:03:02.937764  260034 logs.go:282] 0 containers: []
	W1225 19:03:02.937775  260034 logs.go:284] No container was found matching "kube-proxy"
	I1225 19:03:02.937783  260034 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1225 19:03:02.937832  260034 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1225 19:03:02.969173  260034 cri.go:96] found id: "4d705cb1d8d35bb521ed14e8e331d8b60738d540fae000680afb7e9d190d17db"
	I1225 19:03:02.969196  260034 cri.go:96] found id: "33343dda71f9e4a85025812aa89a625b9ed4aacd8cf40e537a040b7a352c6338"
	I1225 19:03:02.969202  260034 cri.go:96] found id: ""
	I1225 19:03:02.969211  260034 logs.go:282] 2 containers: [4d705cb1d8d35bb521ed14e8e331d8b60738d540fae000680afb7e9d190d17db 33343dda71f9e4a85025812aa89a625b9ed4aacd8cf40e537a040b7a352c6338]
	I1225 19:03:02.969268  260034 ssh_runner.go:195] Run: which crictl
	I1225 19:03:02.973335  260034 ssh_runner.go:195] Run: which crictl
	I1225 19:03:02.978265  260034 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1225 19:03:02.978337  260034 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1225 19:03:03.012483  260034 cri.go:96] found id: ""
	I1225 19:03:03.012516  260034 logs.go:282] 0 containers: []
	W1225 19:03:03.012529  260034 logs.go:284] No container was found matching "kindnet"
	I1225 19:03:03.012538  260034 cri.go:61] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1225 19:03:03.012604  260034 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=storage-provisioner
	I1225 19:03:03.047542  260034 cri.go:96] found id: ""
	I1225 19:03:03.047569  260034 logs.go:282] 0 containers: []
	W1225 19:03:03.047579  260034 logs.go:284] No container was found matching "storage-provisioner"
	I1225 19:03:03.047589  260034 logs.go:123] Gathering logs for kube-apiserver [6286b997d7e536f57739dca40618206bd2111cd0ea9142bc6f4203ad7d126123] ...
	I1225 19:03:03.047610  260034 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 6286b997d7e536f57739dca40618206bd2111cd0ea9142bc6f4203ad7d126123"
	I1225 19:03:03.080556  260034 logs.go:123] Gathering logs for kube-apiserver [e3d5802d1e3d9285d4b0925e9f5391b90305352a65abbab3d6461e977e07719c] ...
	I1225 19:03:03.080581  260034 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e3d5802d1e3d9285d4b0925e9f5391b90305352a65abbab3d6461e977e07719c"
	I1225 19:03:03.118105  260034 logs.go:123] Gathering logs for kube-scheduler [5908cdc560e3221d929c96c779b2e73ece21f96acb63b3adbf448cc7de791f2b] ...
	I1225 19:03:03.118131  260034 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5908cdc560e3221d929c96c779b2e73ece21f96acb63b3adbf448cc7de791f2b"
	I1225 19:03:03.147128  260034 logs.go:123] Gathering logs for kube-controller-manager [4d705cb1d8d35bb521ed14e8e331d8b60738d540fae000680afb7e9d190d17db] ...
	I1225 19:03:03.147153  260034 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 4d705cb1d8d35bb521ed14e8e331d8b60738d540fae000680afb7e9d190d17db"
	I1225 19:03:03.178254  260034 logs.go:123] Gathering logs for etcd [b9e20d208a63ece66f4b7d503d5220ba92810f69d6963dee30a8cf70eeec5508] ...
	I1225 19:03:03.178281  260034 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b9e20d208a63ece66f4b7d503d5220ba92810f69d6963dee30a8cf70eeec5508"
	I1225 19:03:03.213339  260034 logs.go:123] Gathering logs for kube-scheduler [47a1daa4bde269c4add70fd957319c2bd011e78566a1187d9a76b78fb4005782] ...
	I1225 19:03:03.213363  260034 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 47a1daa4bde269c4add70fd957319c2bd011e78566a1187d9a76b78fb4005782"
	I1225 19:03:03.243438  260034 logs.go:123] Gathering logs for kube-controller-manager [33343dda71f9e4a85025812aa89a625b9ed4aacd8cf40e537a040b7a352c6338] ...
	I1225 19:03:03.243464  260034 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 33343dda71f9e4a85025812aa89a625b9ed4aacd8cf40e537a040b7a352c6338"
	I1225 19:03:03.272471  260034 logs.go:123] Gathering logs for CRI-O ...
	I1225 19:03:03.272500  260034 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1225 19:03:03.328034  260034 logs.go:123] Gathering logs for container status ...
	I1225 19:03:03.328064  260034 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1225 19:03:03.364639  260034 logs.go:123] Gathering logs for kubelet ...
	I1225 19:03:03.364667  260034 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1225 19:03:03.457026  260034 logs.go:123] Gathering logs for dmesg ...
	I1225 19:03:03.457060  260034 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1225 19:03:03.471887  260034 logs.go:123] Gathering logs for describe nodes ...
	I1225 19:03:03.471925  260034 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1225 19:03:03.543772  260034 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1225 19:03:06.045368  260034 api_server.go:299] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1225 19:03:03.457599  290541 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1225 19:03:03.457890  290541 start.go:159] libmachine.API.Create for "default-k8s-diff-port-960022" (driver="docker")
	I1225 19:03:03.457951  290541 client.go:173] LocalClient.Create starting
	I1225 19:03:03.458033  290541 main.go:144] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22301-5579/.minikube/certs/ca.pem
	I1225 19:03:03.458082  290541 main.go:144] libmachine: Decoding PEM data...
	I1225 19:03:03.458110  290541 main.go:144] libmachine: Parsing certificate...
	I1225 19:03:03.458183  290541 main.go:144] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22301-5579/.minikube/certs/cert.pem
	I1225 19:03:03.458222  290541 main.go:144] libmachine: Decoding PEM data...
	I1225 19:03:03.458239  290541 main.go:144] libmachine: Parsing certificate...
	I1225 19:03:03.458697  290541 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-960022 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1225 19:03:03.478346  290541 cli_runner.go:211] docker network inspect default-k8s-diff-port-960022 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1225 19:03:03.478429  290541 network_create.go:284] running [docker network inspect default-k8s-diff-port-960022] to gather additional debugging logs...
	I1225 19:03:03.478453  290541 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-960022
	W1225 19:03:03.498977  290541 cli_runner.go:211] docker network inspect default-k8s-diff-port-960022 returned with exit code 1
	I1225 19:03:03.499029  290541 network_create.go:287] error running [docker network inspect default-k8s-diff-port-960022]: docker network inspect default-k8s-diff-port-960022: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network default-k8s-diff-port-960022 not found
	I1225 19:03:03.499046  290541 network_create.go:289] output of [docker network inspect default-k8s-diff-port-960022]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network default-k8s-diff-port-960022 not found
	
	** /stderr **
	I1225 19:03:03.499185  290541 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1225 19:03:03.519019  290541 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-ced36c84bfdd IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:4a:63:07:5b:3f:80} reservation:<nil>}
	I1225 19:03:03.519988  290541 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-4f7e79553acc IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:92:4f:4f:8b:03:9b} reservation:<nil>}
	I1225 19:03:03.520982  290541 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-f47bec209e15 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:b2:e9:83:11:22:b7} reservation:<nil>}
	I1225 19:03:03.521987  290541 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-b5ae0820826f IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:72:16:14:1f:73:da} reservation:<nil>}
	I1225 19:03:03.522949  290541 network.go:211] skipping subnet 192.168.85.0/24 that is taken: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName:br-7fdcf6cdd30d IfaceIPv4:192.168.85.1 IfaceMTU:1500 IfaceMAC:ea:90:74:93:c0:40} reservation:<nil>}
	I1225 19:03:03.523500  290541 network.go:211] skipping subnet 192.168.94.0/24 that is taken: &{IP:192.168.94.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.94.0/24 Gateway:192.168.94.1 ClientMin:192.168.94.2 ClientMax:192.168.94.254 Broadcast:192.168.94.255 IsPrivate:true Interface:{IfaceName:br-f22c9f3db53f IfaceIPv4:192.168.94.1 IfaceMTU:1500 IfaceMAC:42:11:3a:34:ba:a9} reservation:<nil>}
	I1225 19:03:03.524949  290541 network.go:206] using free private subnet 192.168.103.0/24: &{IP:192.168.103.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.103.0/24 Gateway:192.168.103.1 ClientMin:192.168.103.2 ClientMax:192.168.103.254 Broadcast:192.168.103.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001f32d00}
	I1225 19:03:03.524992  290541 network_create.go:124] attempt to create docker network default-k8s-diff-port-960022 192.168.103.0/24 with gateway 192.168.103.1 and MTU of 1500 ...
	I1225 19:03:03.525055  290541 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.103.0/24 --gateway=192.168.103.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=default-k8s-diff-port-960022 default-k8s-diff-port-960022
	I1225 19:03:03.579500  290541 network_create.go:108] docker network default-k8s-diff-port-960022 192.168.103.0/24 created
	I1225 19:03:03.579533  290541 kic.go:121] calculated static IP "192.168.103.2" for the "default-k8s-diff-port-960022" container
	I1225 19:03:03.579596  290541 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1225 19:03:03.598187  290541 cli_runner.go:164] Run: docker volume create default-k8s-diff-port-960022 --label name.minikube.sigs.k8s.io=default-k8s-diff-port-960022 --label created_by.minikube.sigs.k8s.io=true
	I1225 19:03:03.617904  290541 oci.go:103] Successfully created a docker volume default-k8s-diff-port-960022
	I1225 19:03:03.617974  290541 cli_runner.go:164] Run: docker run --rm --name default-k8s-diff-port-960022-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=default-k8s-diff-port-960022 --entrypoint /usr/bin/test -v default-k8s-diff-port-960022:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a -d /var/lib
	I1225 19:03:04.030742  290541 oci.go:107] Successfully prepared a docker volume default-k8s-diff-port-960022
	I1225 19:03:04.030817  290541 preload.go:188] Checking if preload exists for k8s version v1.34.3 and runtime crio
	I1225 19:03:04.030833  290541 kic.go:194] Starting extracting preloaded images to volume ...
	I1225 19:03:04.030928  290541 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22301-5579/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.3-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v default-k8s-diff-port-960022:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a -I lz4 -xf /preloaded.tar -C /extractDir
	I1225 19:03:07.889130  290541 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22301-5579/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.3-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v default-k8s-diff-port-960022:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a -I lz4 -xf /preloaded.tar -C /extractDir: (3.85814146s)
	I1225 19:03:07.889167  290541 kic.go:203] duration metric: took 3.858330464s to extract preloaded images to volume ...
	W1225 19:03:07.889258  290541 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1225 19:03:07.889302  290541 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1225 19:03:07.889350  290541 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1225 19:03:07.945593  290541 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname default-k8s-diff-port-960022 --name default-k8s-diff-port-960022 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=default-k8s-diff-port-960022 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=default-k8s-diff-port-960022 --network default-k8s-diff-port-960022 --ip 192.168.103.2 --volume default-k8s-diff-port-960022:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8444 --publish=127.0.0.1::8444 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a
	I1225 19:03:08.221159  290541 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-960022 --format={{.State.Running}}
	I1225 19:03:08.238995  290541 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-960022 --format={{.State.Status}}
	I1225 19:03:08.259084  290541 cli_runner.go:164] Run: docker exec default-k8s-diff-port-960022 stat /var/lib/dpkg/alternatives/iptables
	W1225 19:03:06.164061  283722 pod_ready.go:104] pod "coredns-66bc5c9577-n4nqj" is not "Ready", error: <nil>
	W1225 19:03:08.164506  283722 pod_ready.go:104] pod "coredns-66bc5c9577-n4nqj" is not "Ready", error: <nil>
	I1225 19:03:08.305099  290541 oci.go:144] the created container "default-k8s-diff-port-960022" has a running status.
	I1225 19:03:08.305135  290541 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/22301-5579/.minikube/machines/default-k8s-diff-port-960022/id_rsa...
	I1225 19:03:08.458115  290541 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/22301-5579/.minikube/machines/default-k8s-diff-port-960022/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1225 19:03:08.487974  290541 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-960022 --format={{.State.Status}}
	I1225 19:03:08.506555  290541 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1225 19:03:08.506576  290541 kic_runner.go:114] Args: [docker exec --privileged default-k8s-diff-port-960022 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1225 19:03:08.556659  290541 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-960022 --format={{.State.Status}}
	I1225 19:03:08.575407  290541 machine.go:94] provisionDockerMachine start ...
	I1225 19:03:08.575484  290541 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-960022
	I1225 19:03:08.598509  290541 main.go:144] libmachine: Using SSH client type: native
	I1225 19:03:08.598911  290541 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e300] 0x850fa0 <nil>  [] 0s} 127.0.0.1 33088 <nil> <nil>}
	I1225 19:03:08.598937  290541 main.go:144] libmachine: About to run SSH command:
	hostname
	I1225 19:03:08.727165  290541 main.go:144] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-960022
	
	I1225 19:03:08.727197  290541 ubuntu.go:182] provisioning hostname "default-k8s-diff-port-960022"
	I1225 19:03:08.727268  290541 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-960022
	I1225 19:03:08.746606  290541 main.go:144] libmachine: Using SSH client type: native
	I1225 19:03:08.746933  290541 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e300] 0x850fa0 <nil>  [] 0s} 127.0.0.1 33088 <nil> <nil>}
	I1225 19:03:08.746960  290541 main.go:144] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-960022 && echo "default-k8s-diff-port-960022" | sudo tee /etc/hostname
	I1225 19:03:08.881101  290541 main.go:144] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-960022
	
	I1225 19:03:08.881206  290541 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-960022
	I1225 19:03:08.903866  290541 main.go:144] libmachine: Using SSH client type: native
	I1225 19:03:08.904122  290541 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e300] 0x850fa0 <nil>  [] 0s} 127.0.0.1 33088 <nil> <nil>}
	I1225 19:03:08.904145  290541 main.go:144] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-960022' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-960022/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-960022' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1225 19:03:09.027579  290541 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	I1225 19:03:09.027625  290541 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22301-5579/.minikube CaCertPath:/home/jenkins/minikube-integration/22301-5579/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22301-5579/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22301-5579/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22301-5579/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22301-5579/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22301-5579/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22301-5579/.minikube}
	I1225 19:03:09.027665  290541 ubuntu.go:190] setting up certificates
	I1225 19:03:09.027678  290541 provision.go:84] configureAuth start
	I1225 19:03:09.027764  290541 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-960022
	I1225 19:03:09.044970  290541 provision.go:143] copyHostCerts
	I1225 19:03:09.045033  290541 exec_runner.go:144] found /home/jenkins/minikube-integration/22301-5579/.minikube/ca.pem, removing ...
	I1225 19:03:09.045044  290541 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22301-5579/.minikube/ca.pem
	I1225 19:03:09.045123  290541 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22301-5579/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22301-5579/.minikube/ca.pem (1078 bytes)
	I1225 19:03:09.045226  290541 exec_runner.go:144] found /home/jenkins/minikube-integration/22301-5579/.minikube/cert.pem, removing ...
	I1225 19:03:09.045235  290541 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22301-5579/.minikube/cert.pem
	I1225 19:03:09.045261  290541 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22301-5579/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22301-5579/.minikube/cert.pem (1123 bytes)
	I1225 19:03:09.045328  290541 exec_runner.go:144] found /home/jenkins/minikube-integration/22301-5579/.minikube/key.pem, removing ...
	I1225 19:03:09.045335  290541 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22301-5579/.minikube/key.pem
	I1225 19:03:09.045358  290541 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22301-5579/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22301-5579/.minikube/key.pem (1679 bytes)
	I1225 19:03:09.045888  290541 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22301-5579/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22301-5579/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22301-5579/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-960022 san=[127.0.0.1 192.168.103.2 default-k8s-diff-port-960022 localhost minikube]
	I1225 19:03:09.092526  290541 provision.go:177] copyRemoteCerts
	I1225 19:03:09.092585  290541 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1225 19:03:09.092617  290541 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-960022
	I1225 19:03:09.109947  290541 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33088 SSHKeyPath:/home/jenkins/minikube-integration/22301-5579/.minikube/machines/default-k8s-diff-port-960022/id_rsa Username:docker}
	I1225 19:03:09.202295  290541 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22301-5579/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I1225 19:03:09.221345  290541 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22301-5579/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1225 19:03:09.238628  290541 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22301-5579/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1225 19:03:09.256546  290541 provision.go:87] duration metric: took 228.857085ms to configureAuth
	I1225 19:03:09.256572  290541 ubuntu.go:206] setting minikube options for container-runtime
	I1225 19:03:09.256741  290541 config.go:182] Loaded profile config "default-k8s-diff-port-960022": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1225 19:03:09.256845  290541 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-960022
	I1225 19:03:09.275421  290541 main.go:144] libmachine: Using SSH client type: native
	I1225 19:03:09.275621  290541 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e300] 0x850fa0 <nil>  [] 0s} 127.0.0.1 33088 <nil> <nil>}
	I1225 19:03:09.275637  290541 main.go:144] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1225 19:03:09.532278  290541 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1225 19:03:09.532307  290541 machine.go:97] duration metric: took 956.878726ms to provisionDockerMachine
	I1225 19:03:09.532318  290541 client.go:176] duration metric: took 6.074358023s to LocalClient.Create
	I1225 19:03:09.532337  290541 start.go:167] duration metric: took 6.074448934s to libmachine.API.Create "default-k8s-diff-port-960022"
	I1225 19:03:09.532343  290541 start.go:293] postStartSetup for "default-k8s-diff-port-960022" (driver="docker")
	I1225 19:03:09.532354  290541 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1225 19:03:09.532419  290541 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1225 19:03:09.532467  290541 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-960022
	I1225 19:03:09.550263  290541 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33088 SSHKeyPath:/home/jenkins/minikube-integration/22301-5579/.minikube/machines/default-k8s-diff-port-960022/id_rsa Username:docker}
	I1225 19:03:09.642784  290541 ssh_runner.go:195] Run: cat /etc/os-release
	I1225 19:03:09.646344  290541 main.go:144] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1225 19:03:09.646366  290541 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1225 19:03:09.646376  290541 filesync.go:126] Scanning /home/jenkins/minikube-integration/22301-5579/.minikube/addons for local assets ...
	I1225 19:03:09.646430  290541 filesync.go:126] Scanning /home/jenkins/minikube-integration/22301-5579/.minikube/files for local assets ...
	I1225 19:03:09.646539  290541 filesync.go:149] local asset: /home/jenkins/minikube-integration/22301-5579/.minikube/files/etc/ssl/certs/91122.pem -> 91122.pem in /etc/ssl/certs
	I1225 19:03:09.646661  290541 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1225 19:03:09.654261  290541 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22301-5579/.minikube/files/etc/ssl/certs/91122.pem --> /etc/ssl/certs/91122.pem (1708 bytes)
	I1225 19:03:09.674002  290541 start.go:296] duration metric: took 141.645847ms for postStartSetup
	I1225 19:03:09.674392  290541 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-960022
	I1225 19:03:09.691409  290541 profile.go:143] Saving config to /home/jenkins/minikube-integration/22301-5579/.minikube/profiles/default-k8s-diff-port-960022/config.json ...
	I1225 19:03:09.691669  290541 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1225 19:03:09.691735  290541 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-960022
	I1225 19:03:09.709864  290541 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33088 SSHKeyPath:/home/jenkins/minikube-integration/22301-5579/.minikube/machines/default-k8s-diff-port-960022/id_rsa Username:docker}
	I1225 19:03:09.797890  290541 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1225 19:03:09.802308  290541 start.go:128] duration metric: took 6.34665946s to createHost
	I1225 19:03:09.802331  290541 start.go:83] releasing machines lock for "default-k8s-diff-port-960022", held for 6.346794686s
	I1225 19:03:09.802417  290541 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-960022
	I1225 19:03:09.820182  290541 ssh_runner.go:195] Run: cat /version.json
	I1225 19:03:09.820242  290541 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-960022
	I1225 19:03:09.820250  290541 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1225 19:03:09.820310  290541 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-960022
	I1225 19:03:09.838443  290541 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33088 SSHKeyPath:/home/jenkins/minikube-integration/22301-5579/.minikube/machines/default-k8s-diff-port-960022/id_rsa Username:docker}
	I1225 19:03:09.838779  290541 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33088 SSHKeyPath:/home/jenkins/minikube-integration/22301-5579/.minikube/machines/default-k8s-diff-port-960022/id_rsa Username:docker}
	I1225 19:03:09.984002  290541 ssh_runner.go:195] Run: systemctl --version
	I1225 19:03:09.990474  290541 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1225 19:03:10.025389  290541 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1225 19:03:10.030215  290541 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1225 19:03:10.030278  290541 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1225 19:03:10.055379  290541 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1225 19:03:10.055398  290541 start.go:496] detecting cgroup driver to use...
	I1225 19:03:10.055428  290541 detect.go:190] detected "systemd" cgroup driver on host os
	I1225 19:03:10.055477  290541 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1225 19:03:10.071670  290541 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1225 19:03:10.084033  290541 docker.go:218] disabling cri-docker service (if available) ...
	I1225 19:03:10.084084  290541 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1225 19:03:10.100284  290541 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1225 19:03:10.118126  290541 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1225 19:03:10.204379  290541 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1225 19:03:10.298103  290541 docker.go:234] disabling docker service ...
	I1225 19:03:10.298179  290541 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1225 19:03:10.318426  290541 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1225 19:03:10.331202  290541 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1225 19:03:10.418713  290541 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1225 19:03:10.508858  290541 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1225 19:03:10.521817  290541 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1225 19:03:10.536454  290541 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1225 19:03:10.536505  290541 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1225 19:03:10.546955  290541 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1225 19:03:10.547041  290541 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1225 19:03:10.555738  290541 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1225 19:03:10.564495  290541 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1225 19:03:10.573237  290541 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1225 19:03:10.581102  290541 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1225 19:03:10.589368  290541 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1225 19:03:10.602102  290541 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1225 19:03:10.610491  290541 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1225 19:03:10.617558  290541 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1225 19:03:10.624678  290541 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1225 19:03:10.707455  290541 ssh_runner.go:195] Run: sudo systemctl restart crio
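	For reference, the crictl and CRI-O settings that the tee and sed commands above converge on can be sketched roughly as follows; this is reconstructed from the commands quoted in this log rather than dumped from the node, and all other keys in 02-crio.conf are omitted:
	
	# /etc/crictl.yaml (written by the tee command above)
	runtime-endpoint: unix:///var/run/crio/crio.sock
	
	# /etc/crio/crio.conf.d/02-crio.conf (keys touched by the sed edits above)
	pause_image = "registry.k8s.io/pause:3.10.1"
	cgroup_manager = "systemd"
	conmon_cgroup = "pod"
	default_sysctls = [
	  "net.ipv4.ip_unprivileged_port_start=0",
	]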
	I1225 19:03:10.848512  290541 start.go:553] Will wait 60s for socket path /var/run/crio/crio.sock
	I1225 19:03:10.848583  290541 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1225 19:03:10.852823  290541 start.go:574] Will wait 60s for crictl version
	I1225 19:03:10.852874  290541 ssh_runner.go:195] Run: which crictl
	I1225 19:03:10.856546  290541 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1225 19:03:10.882737  290541 start.go:590] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1225 19:03:10.882811  290541 ssh_runner.go:195] Run: crio --version
	I1225 19:03:10.908556  290541 ssh_runner.go:195] Run: crio --version
	I1225 19:03:10.936701  290541 out.go:179] * Preparing Kubernetes v1.34.3 on CRI-O 1.34.3 ...
	W1225 19:03:08.252388  281279 pod_ready.go:104] pod "coredns-7d764666f9-lqvms" is not "Ready", error: <nil>
	I1225 19:03:10.253970  281279 pod_ready.go:94] pod "coredns-7d764666f9-lqvms" is "Ready"
	I1225 19:03:10.254003  281279 pod_ready.go:86] duration metric: took 37.507239153s for pod "coredns-7d764666f9-lqvms" in "kube-system" namespace to be "Ready" or be gone ...
	I1225 19:03:10.256491  281279 pod_ready.go:83] waiting for pod "etcd-no-preload-148352" in "kube-system" namespace to be "Ready" or be gone ...
	I1225 19:03:10.270319  281279 pod_ready.go:94] pod "etcd-no-preload-148352" is "Ready"
	I1225 19:03:10.270349  281279 pod_ready.go:86] duration metric: took 13.833526ms for pod "etcd-no-preload-148352" in "kube-system" namespace to be "Ready" or be gone ...
	I1225 19:03:10.357545  281279 pod_ready.go:83] waiting for pod "kube-apiserver-no-preload-148352" in "kube-system" namespace to be "Ready" or be gone ...
	I1225 19:03:10.362136  281279 pod_ready.go:94] pod "kube-apiserver-no-preload-148352" is "Ready"
	I1225 19:03:10.362165  281279 pod_ready.go:86] duration metric: took 4.592851ms for pod "kube-apiserver-no-preload-148352" in "kube-system" namespace to be "Ready" or be gone ...
	I1225 19:03:10.364693  281279 pod_ready.go:83] waiting for pod "kube-controller-manager-no-preload-148352" in "kube-system" namespace to be "Ready" or be gone ...
	I1225 19:03:10.453477  281279 pod_ready.go:94] pod "kube-controller-manager-no-preload-148352" is "Ready"
	I1225 19:03:10.453527  281279 pod_ready.go:86] duration metric: took 88.778375ms for pod "kube-controller-manager-no-preload-148352" in "kube-system" namespace to be "Ready" or be gone ...
	I1225 19:03:10.655405  281279 pod_ready.go:83] waiting for pod "kube-proxy-j2p4x" in "kube-system" namespace to be "Ready" or be gone ...
	I1225 19:03:11.050821  281279 pod_ready.go:94] pod "kube-proxy-j2p4x" is "Ready"
	I1225 19:03:11.050848  281279 pod_ready.go:86] duration metric: took 395.411494ms for pod "kube-proxy-j2p4x" in "kube-system" namespace to be "Ready" or be gone ...
	I1225 19:03:11.251357  281279 pod_ready.go:83] waiting for pod "kube-scheduler-no-preload-148352" in "kube-system" namespace to be "Ready" or be gone ...
	I1225 19:03:11.650977  281279 pod_ready.go:94] pod "kube-scheduler-no-preload-148352" is "Ready"
	I1225 19:03:11.650999  281279 pod_ready.go:86] duration metric: took 399.61097ms for pod "kube-scheduler-no-preload-148352" in "kube-system" namespace to be "Ready" or be gone ...
	I1225 19:03:11.651010  281279 pod_ready.go:40] duration metric: took 38.907995238s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1225 19:03:11.698020  281279 start.go:625] kubectl: 1.35.0, cluster: 1.35.0-rc.1 (minor skew: 0)
	I1225 19:03:11.700129  281279 out.go:179] * Done! kubectl is now configured to use "no-preload-148352" cluster and "default" namespace by default
	I1225 19:03:10.937944  290541 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-960022 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1225 19:03:10.955707  290541 ssh_runner.go:195] Run: grep 192.168.103.1	host.minikube.internal$ /etc/hosts
	I1225 19:03:10.959652  290541 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.103.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1225 19:03:10.969859  290541 kubeadm.go:884] updating cluster {Name:default-k8s-diff-port-960022 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:default-k8s-diff-port-960022 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8444 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false} ...
	I1225 19:03:10.970004  290541 preload.go:188] Checking if preload exists for k8s version v1.34.3 and runtime crio
	I1225 19:03:10.970067  290541 ssh_runner.go:195] Run: sudo crictl images --output json
	I1225 19:03:11.000927  290541 crio.go:561] all images are preloaded for cri-o runtime.
	I1225 19:03:11.000960  290541 crio.go:433] Images already preloaded, skipping extraction
	I1225 19:03:11.001017  290541 ssh_runner.go:195] Run: sudo crictl images --output json
	I1225 19:03:11.025392  290541 crio.go:561] all images are preloaded for cri-o runtime.
	I1225 19:03:11.025414  290541 cache_images.go:86] Images are preloaded, skipping loading
	I1225 19:03:11.025425  290541 kubeadm.go:935] updating node { 192.168.103.2 8444 v1.34.3 crio true true} ...
	I1225 19:03:11.025513  290541 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=default-k8s-diff-port-960022 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.103.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.3 ClusterName:default-k8s-diff-port-960022 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1225 19:03:11.025590  290541 ssh_runner.go:195] Run: crio config
	I1225 19:03:11.076734  290541 cni.go:84] Creating CNI manager for ""
	I1225 19:03:11.076756  290541 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1225 19:03:11.076773  290541 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1225 19:03:11.076802  290541 kubeadm.go:197] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.103.2 APIServerPort:8444 KubernetesVersion:v1.34.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-960022 NodeName:default-k8s-diff-port-960022 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.103.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.103.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1225 19:03:11.076995  290541 kubeadm.go:203] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.103.2
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-960022"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.103.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.103.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1225 19:03:11.077093  290541 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.3
	I1225 19:03:11.086253  290541 binaries.go:51] Found k8s binaries, skipping transfer
	I1225 19:03:11.086316  290541 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1225 19:03:11.095352  290541 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (379 bytes)
	I1225 19:03:11.110084  290541 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1225 19:03:11.128766  290541 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2227 bytes)
	I1225 19:03:11.141591  290541 ssh_runner.go:195] Run: grep 192.168.103.2	control-plane.minikube.internal$ /etc/hosts
	I1225 19:03:11.145207  290541 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.103.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1225 19:03:11.156484  290541 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1225 19:03:11.252998  290541 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1225 19:03:11.278658  290541 certs.go:69] Setting up /home/jenkins/minikube-integration/22301-5579/.minikube/profiles/default-k8s-diff-port-960022 for IP: 192.168.103.2
	I1225 19:03:11.278680  290541 certs.go:195] generating shared ca certs ...
	I1225 19:03:11.278706  290541 certs.go:227] acquiring lock for ca certs: {Name:mkc96ab6366f062029d385d20297063671b19bca Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1225 19:03:11.279070  290541 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22301-5579/.minikube/ca.key
	I1225 19:03:11.279143  290541 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22301-5579/.minikube/proxy-client-ca.key
	I1225 19:03:11.279160  290541 certs.go:257] generating profile certs ...
	I1225 19:03:11.279236  290541 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22301-5579/.minikube/profiles/default-k8s-diff-port-960022/client.key
	I1225 19:03:11.279251  290541 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22301-5579/.minikube/profiles/default-k8s-diff-port-960022/client.crt with IP's: []
	I1225 19:03:11.311270  290541 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22301-5579/.minikube/profiles/default-k8s-diff-port-960022/client.crt ...
	I1225 19:03:11.311306  290541 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22301-5579/.minikube/profiles/default-k8s-diff-port-960022/client.crt: {Name:mk32536f2e89a3eda9585f7095b2d94b4d0d92fd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1225 19:03:11.311516  290541 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22301-5579/.minikube/profiles/default-k8s-diff-port-960022/client.key ...
	I1225 19:03:11.311537  290541 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22301-5579/.minikube/profiles/default-k8s-diff-port-960022/client.key: {Name:mk9b6414010a81635dab73577843147d7842ae32 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1225 19:03:11.311696  290541 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22301-5579/.minikube/profiles/default-k8s-diff-port-960022/apiserver.key.a3ef6c0c
	I1225 19:03:11.311722  290541 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22301-5579/.minikube/profiles/default-k8s-diff-port-960022/apiserver.crt.a3ef6c0c with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.103.2]
	I1225 19:03:11.378381  290541 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22301-5579/.minikube/profiles/default-k8s-diff-port-960022/apiserver.crt.a3ef6c0c ...
	I1225 19:03:11.378405  290541 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22301-5579/.minikube/profiles/default-k8s-diff-port-960022/apiserver.crt.a3ef6c0c: {Name:mk0de737dcfd45542b929ddc2fcb19b22cc1d79d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1225 19:03:11.378580  290541 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22301-5579/.minikube/profiles/default-k8s-diff-port-960022/apiserver.key.a3ef6c0c ...
	I1225 19:03:11.378597  290541 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22301-5579/.minikube/profiles/default-k8s-diff-port-960022/apiserver.key.a3ef6c0c: {Name:mkb082fb82d4aa0c55c71dc96dfbcbbd4a1f57b7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1225 19:03:11.378703  290541 certs.go:382] copying /home/jenkins/minikube-integration/22301-5579/.minikube/profiles/default-k8s-diff-port-960022/apiserver.crt.a3ef6c0c -> /home/jenkins/minikube-integration/22301-5579/.minikube/profiles/default-k8s-diff-port-960022/apiserver.crt
	I1225 19:03:11.378790  290541 certs.go:386] copying /home/jenkins/minikube-integration/22301-5579/.minikube/profiles/default-k8s-diff-port-960022/apiserver.key.a3ef6c0c -> /home/jenkins/minikube-integration/22301-5579/.minikube/profiles/default-k8s-diff-port-960022/apiserver.key
	I1225 19:03:11.378874  290541 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22301-5579/.minikube/profiles/default-k8s-diff-port-960022/proxy-client.key
	I1225 19:03:11.378912  290541 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22301-5579/.minikube/profiles/default-k8s-diff-port-960022/proxy-client.crt with IP's: []
	I1225 19:03:11.435262  290541 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22301-5579/.minikube/profiles/default-k8s-diff-port-960022/proxy-client.crt ...
	I1225 19:03:11.435289  290541 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22301-5579/.minikube/profiles/default-k8s-diff-port-960022/proxy-client.crt: {Name:mk957cdcdb598703fddf6148360e81b85418c70a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1225 19:03:11.435458  290541 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22301-5579/.minikube/profiles/default-k8s-diff-port-960022/proxy-client.key ...
	I1225 19:03:11.435479  290541 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22301-5579/.minikube/profiles/default-k8s-diff-port-960022/proxy-client.key: {Name:mk14ca2d78a55c3fdc968bd5cd9741d839de08ee Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1225 19:03:11.435696  290541 certs.go:484] found cert: /home/jenkins/minikube-integration/22301-5579/.minikube/certs/9112.pem (1338 bytes)
	W1225 19:03:11.435745  290541 certs.go:480] ignoring /home/jenkins/minikube-integration/22301-5579/.minikube/certs/9112_empty.pem, impossibly tiny 0 bytes
	I1225 19:03:11.435762  290541 certs.go:484] found cert: /home/jenkins/minikube-integration/22301-5579/.minikube/certs/ca-key.pem (1679 bytes)
	I1225 19:03:11.435799  290541 certs.go:484] found cert: /home/jenkins/minikube-integration/22301-5579/.minikube/certs/ca.pem (1078 bytes)
	I1225 19:03:11.435834  290541 certs.go:484] found cert: /home/jenkins/minikube-integration/22301-5579/.minikube/certs/cert.pem (1123 bytes)
	I1225 19:03:11.435868  290541 certs.go:484] found cert: /home/jenkins/minikube-integration/22301-5579/.minikube/certs/key.pem (1679 bytes)
	I1225 19:03:11.435941  290541 certs.go:484] found cert: /home/jenkins/minikube-integration/22301-5579/.minikube/files/etc/ssl/certs/91122.pem (1708 bytes)
	I1225 19:03:11.436682  290541 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22301-5579/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1225 19:03:11.457965  290541 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22301-5579/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1225 19:03:11.476251  290541 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22301-5579/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1225 19:03:11.494329  290541 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22301-5579/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1225 19:03:11.511365  290541 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22301-5579/.minikube/profiles/default-k8s-diff-port-960022/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1225 19:03:11.529885  290541 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22301-5579/.minikube/profiles/default-k8s-diff-port-960022/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1225 19:03:11.548550  290541 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22301-5579/.minikube/profiles/default-k8s-diff-port-960022/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1225 19:03:11.565493  290541 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22301-5579/.minikube/profiles/default-k8s-diff-port-960022/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1225 19:03:11.582435  290541 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22301-5579/.minikube/files/etc/ssl/certs/91122.pem --> /usr/share/ca-certificates/91122.pem (1708 bytes)
	I1225 19:03:11.600828  290541 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22301-5579/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1225 19:03:11.618667  290541 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22301-5579/.minikube/certs/9112.pem --> /usr/share/ca-certificates/9112.pem (1338 bytes)
	I1225 19:03:11.636460  290541 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (722 bytes)
	I1225 19:03:11.649422  290541 ssh_runner.go:195] Run: openssl version
	I1225 19:03:11.656415  290541 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/91122.pem
	I1225 19:03:11.665217  290541 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/91122.pem /etc/ssl/certs/91122.pem
	I1225 19:03:11.674289  290541 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/91122.pem
	I1225 19:03:11.678053  290541 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 25 18:34 /usr/share/ca-certificates/91122.pem
	I1225 19:03:11.678108  290541 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/91122.pem
	I1225 19:03:11.717182  290541 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1225 19:03:11.726661  290541 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/91122.pem /etc/ssl/certs/3ec20f2e.0
	I1225 19:03:11.735492  290541 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1225 19:03:11.742735  290541 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1225 19:03:11.750048  290541 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1225 19:03:11.754134  290541 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 25 18:28 /usr/share/ca-certificates/minikubeCA.pem
	I1225 19:03:11.754185  290541 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1225 19:03:11.794412  290541 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1225 19:03:11.803581  290541 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
	I1225 19:03:11.812000  290541 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/9112.pem
	I1225 19:03:11.821162  290541 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/9112.pem /etc/ssl/certs/9112.pem
	I1225 19:03:11.829185  290541 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/9112.pem
	I1225 19:03:11.833291  290541 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 25 18:34 /usr/share/ca-certificates/9112.pem
	I1225 19:03:11.833342  290541 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/9112.pem
	I1225 19:03:11.868441  290541 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1225 19:03:11.875949  290541 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/9112.pem /etc/ssl/certs/51391683.0
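	The three repeated blocks above all follow the standard OpenSSL CA-directory layout: each PEM is placed under /usr/share/ca-certificates, symlinked into /etc/ssl/certs, and then symlinked again under its subject hash so OpenSSL can look it up. A rough equivalent for the minikubeCA case in this run:
	
	sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem    # output determines the link name used below (b5213941 in this run)
	sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0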
	I1225 19:03:11.883260  290541 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1225 19:03:11.886717  290541 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1225 19:03:11.886773  290541 kubeadm.go:401] StartCluster: {Name:default-k8s-diff-port-960022 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:default-k8s-diff-port-960022 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8444 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1225 19:03:11.886857  290541 cri.go:61] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1225 19:03:11.886922  290541 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1225 19:03:11.913305  290541 cri.go:96] found id: ""
	I1225 19:03:11.913362  290541 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1225 19:03:11.921550  290541 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1225 19:03:11.929740  290541 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1225 19:03:11.929783  290541 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1225 19:03:11.937471  290541 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1225 19:03:11.937496  290541 kubeadm.go:158] found existing configuration files:
	
	I1225 19:03:11.937536  290541 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I1225 19:03:11.944801  290541 kubeadm.go:164] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1225 19:03:11.944840  290541 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1225 19:03:11.953174  290541 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I1225 19:03:11.960603  290541 kubeadm.go:164] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1225 19:03:11.960649  290541 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1225 19:03:11.969760  290541 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I1225 19:03:11.978412  290541 kubeadm.go:164] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1225 19:03:11.978467  290541 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1225 19:03:11.986927  290541 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I1225 19:03:11.995043  290541 kubeadm.go:164] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1225 19:03:11.995105  290541 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1225 19:03:12.003077  290541 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1225 19:03:12.042697  290541 kubeadm.go:319] [init] Using Kubernetes version: v1.34.3
	I1225 19:03:12.042775  290541 kubeadm.go:319] [preflight] Running pre-flight checks
	I1225 19:03:12.075292  290541 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1225 19:03:12.075403  290541 kubeadm.go:319] KERNEL_VERSION: 6.8.0-1045-gcp
	I1225 19:03:12.075476  290541 kubeadm.go:319] OS: Linux
	I1225 19:03:12.075561  290541 kubeadm.go:319] CGROUPS_CPU: enabled
	I1225 19:03:12.075626  290541 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1225 19:03:12.075710  290541 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1225 19:03:12.075786  290541 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1225 19:03:12.075879  290541 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1225 19:03:12.075984  290541 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1225 19:03:12.076060  290541 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1225 19:03:12.076126  290541 kubeadm.go:319] CGROUPS_IO: enabled
	I1225 19:03:12.137914  290541 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1225 19:03:12.138081  290541 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1225 19:03:12.138228  290541 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1225 19:03:12.146676  290541 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1225 19:03:11.047991  260034 api_server.go:315] stopped: https://192.168.94.2:8443/healthz: Get "https://192.168.94.2:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1225 19:03:11.048064  260034 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1225 19:03:11.048128  260034 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1225 19:03:11.078352  260034 cri.go:96] found id: "1d4a54214a1675f6c77eff77964da947e4f00cff45e007ad6adb0c898c3e76fa"
	I1225 19:03:11.078375  260034 cri.go:96] found id: "6286b997d7e536f57739dca40618206bd2111cd0ea9142bc6f4203ad7d126123"
	I1225 19:03:11.078381  260034 cri.go:96] found id: "e3d5802d1e3d9285d4b0925e9f5391b90305352a65abbab3d6461e977e07719c"
	I1225 19:03:11.078386  260034 cri.go:96] found id: ""
	I1225 19:03:11.078394  260034 logs.go:282] 3 containers: [1d4a54214a1675f6c77eff77964da947e4f00cff45e007ad6adb0c898c3e76fa 6286b997d7e536f57739dca40618206bd2111cd0ea9142bc6f4203ad7d126123 e3d5802d1e3d9285d4b0925e9f5391b90305352a65abbab3d6461e977e07719c]
	I1225 19:03:11.078452  260034 ssh_runner.go:195] Run: which crictl
	I1225 19:03:11.082676  260034 ssh_runner.go:195] Run: which crictl
	I1225 19:03:11.086760  260034 ssh_runner.go:195] Run: which crictl
	I1225 19:03:11.090483  260034 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1225 19:03:11.090541  260034 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1225 19:03:11.121886  260034 cri.go:96] found id: "b9e20d208a63ece66f4b7d503d5220ba92810f69d6963dee30a8cf70eeec5508"
	I1225 19:03:11.121925  260034 cri.go:96] found id: ""
	I1225 19:03:11.121936  260034 logs.go:282] 1 containers: [b9e20d208a63ece66f4b7d503d5220ba92810f69d6963dee30a8cf70eeec5508]
	I1225 19:03:11.121995  260034 ssh_runner.go:195] Run: which crictl
	I1225 19:03:11.126770  260034 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1225 19:03:11.126850  260034 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1225 19:03:11.154969  260034 cri.go:96] found id: ""
	I1225 19:03:11.154993  260034 logs.go:282] 0 containers: []
	W1225 19:03:11.155004  260034 logs.go:284] No container was found matching "coredns"
	I1225 19:03:11.155011  260034 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1225 19:03:11.155069  260034 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1225 19:03:11.187513  260034 cri.go:96] found id: "5908cdc560e3221d929c96c779b2e73ece21f96acb63b3adbf448cc7de791f2b"
	I1225 19:03:11.187537  260034 cri.go:96] found id: "47a1daa4bde269c4add70fd957319c2bd011e78566a1187d9a76b78fb4005782"
	I1225 19:03:11.187542  260034 cri.go:96] found id: ""
	I1225 19:03:11.187552  260034 logs.go:282] 2 containers: [5908cdc560e3221d929c96c779b2e73ece21f96acb63b3adbf448cc7de791f2b 47a1daa4bde269c4add70fd957319c2bd011e78566a1187d9a76b78fb4005782]
	I1225 19:03:11.187623  260034 ssh_runner.go:195] Run: which crictl
	I1225 19:03:11.193142  260034 ssh_runner.go:195] Run: which crictl
	I1225 19:03:11.199845  260034 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1225 19:03:11.199935  260034 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1225 19:03:11.229683  260034 cri.go:96] found id: ""
	I1225 19:03:11.229706  260034 logs.go:282] 0 containers: []
	W1225 19:03:11.229714  260034 logs.go:284] No container was found matching "kube-proxy"
	I1225 19:03:11.229718  260034 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1225 19:03:11.229763  260034 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1225 19:03:11.256771  260034 cri.go:96] found id: "4d705cb1d8d35bb521ed14e8e331d8b60738d540fae000680afb7e9d190d17db"
	I1225 19:03:11.256791  260034 cri.go:96] found id: "33343dda71f9e4a85025812aa89a625b9ed4aacd8cf40e537a040b7a352c6338"
	I1225 19:03:11.256799  260034 cri.go:96] found id: ""
	I1225 19:03:11.256806  260034 logs.go:282] 2 containers: [4d705cb1d8d35bb521ed14e8e331d8b60738d540fae000680afb7e9d190d17db 33343dda71f9e4a85025812aa89a625b9ed4aacd8cf40e537a040b7a352c6338]
	I1225 19:03:11.256855  260034 ssh_runner.go:195] Run: which crictl
	I1225 19:03:11.260853  260034 ssh_runner.go:195] Run: which crictl
	I1225 19:03:11.264338  260034 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1225 19:03:11.264393  260034 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1225 19:03:11.295945  260034 cri.go:96] found id: ""
	I1225 19:03:11.295967  260034 logs.go:282] 0 containers: []
	W1225 19:03:11.295975  260034 logs.go:284] No container was found matching "kindnet"
	I1225 19:03:11.295980  260034 cri.go:61] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1225 19:03:11.296032  260034 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=storage-provisioner
	I1225 19:03:11.326722  260034 cri.go:96] found id: ""
	I1225 19:03:11.326746  260034 logs.go:282] 0 containers: []
	W1225 19:03:11.326757  260034 logs.go:284] No container was found matching "storage-provisioner"
	I1225 19:03:11.326767  260034 logs.go:123] Gathering logs for CRI-O ...
	I1225 19:03:11.326780  260034 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1225 19:03:11.377016  260034 logs.go:123] Gathering logs for kube-apiserver [1d4a54214a1675f6c77eff77964da947e4f00cff45e007ad6adb0c898c3e76fa] ...
	I1225 19:03:11.377049  260034 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 1d4a54214a1675f6c77eff77964da947e4f00cff45e007ad6adb0c898c3e76fa"
	I1225 19:03:11.407203  260034 logs.go:123] Gathering logs for kube-apiserver [6286b997d7e536f57739dca40618206bd2111cd0ea9142bc6f4203ad7d126123] ...
	I1225 19:03:11.407231  260034 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 6286b997d7e536f57739dca40618206bd2111cd0ea9142bc6f4203ad7d126123"
	I1225 19:03:11.438636  260034 logs.go:123] Gathering logs for kube-scheduler [5908cdc560e3221d929c96c779b2e73ece21f96acb63b3adbf448cc7de791f2b] ...
	I1225 19:03:11.438661  260034 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5908cdc560e3221d929c96c779b2e73ece21f96acb63b3adbf448cc7de791f2b"
	I1225 19:03:11.466434  260034 logs.go:123] Gathering logs for kube-controller-manager [4d705cb1d8d35bb521ed14e8e331d8b60738d540fae000680afb7e9d190d17db] ...
	I1225 19:03:11.466461  260034 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 4d705cb1d8d35bb521ed14e8e331d8b60738d540fae000680afb7e9d190d17db"
	I1225 19:03:11.492372  260034 logs.go:123] Gathering logs for container status ...
	I1225 19:03:11.492398  260034 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1225 19:03:11.523343  260034 logs.go:123] Gathering logs for kubelet ...
	I1225 19:03:11.523370  260034 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1225 19:03:11.603534  260034 logs.go:123] Gathering logs for dmesg ...
	I1225 19:03:11.603561  260034 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1225 19:03:11.617107  260034 logs.go:123] Gathering logs for describe nodes ...
	I1225 19:03:11.617133  260034 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
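	The interleaved 260034 entries above come from a different profile whose apiserver health check at https://192.168.94.2:8443/healthz timed out, so minikube's log collector starts enumerating containers and pulling their logs. The same evidence can be gathered by hand on that node with commands roughly equivalent to the ones quoted in this log (the container ID is a placeholder taken from the crictl listing):
	
	sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	sudo crictl logs --tail 400 <container-id>
	sudo journalctl -u crio -n 400
	sudo journalctl -u kubelet -n 400
	sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400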
	I1225 19:03:12.148720  290541 out.go:252]   - Generating certificates and keys ...
	I1225 19:03:12.148820  290541 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1225 19:03:12.148941  290541 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1225 19:03:12.492963  290541 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1225 19:03:12.589526  290541 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1225 19:03:12.781681  290541 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1225 19:03:12.911174  290541 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	W1225 19:03:10.663757  283722 pod_ready.go:104] pod "coredns-66bc5c9577-n4nqj" is not "Ready", error: <nil>
	W1225 19:03:13.163492  283722 pod_ready.go:104] pod "coredns-66bc5c9577-n4nqj" is not "Ready", error: <nil>
	I1225 19:03:13.287408  290541 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1225 19:03:13.287535  290541 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [default-k8s-diff-port-960022 localhost] and IPs [192.168.103.2 127.0.0.1 ::1]
	I1225 19:03:13.591464  290541 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1225 19:03:13.591721  290541 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [default-k8s-diff-port-960022 localhost] and IPs [192.168.103.2 127.0.0.1 ::1]
	I1225 19:03:13.735044  290541 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1225 19:03:14.135286  290541 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1225 19:03:14.247035  290541 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1225 19:03:14.247172  290541 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1225 19:03:14.380656  290541 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1225 19:03:14.587167  290541 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1225 19:03:14.666880  290541 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1225 19:03:14.881639  290541 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1225 19:03:15.249231  290541 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1225 19:03:15.249827  290541 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1225 19:03:15.253689  290541 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1225 19:03:15.256473  290541 out.go:252]   - Booting up control plane ...
	I1225 19:03:15.256600  290541 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1225 19:03:15.256710  290541 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1225 19:03:15.257363  290541 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1225 19:03:15.270872  290541 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1225 19:03:15.271019  290541 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1225 19:03:15.277503  290541 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1225 19:03:15.277804  290541 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1225 19:03:15.277867  290541 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1225 19:03:15.382073  290541 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1225 19:03:15.382235  290541 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1225 19:03:15.883858  290541 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 501.839176ms
	I1225 19:03:15.886634  290541 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1225 19:03:15.886785  290541 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.103.2:8444/livez
	I1225 19:03:15.886884  290541 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1225 19:03:15.887000  290541 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1225 19:03:17.525448  290541 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 1.638650065s
	I1225 19:03:17.662016  290541 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 1.775331698s
	W1225 19:03:15.164048  283722 pod_ready.go:104] pod "coredns-66bc5c9577-n4nqj" is not "Ready", error: <nil>
	W1225 19:03:17.164424  283722 pod_ready.go:104] pod "coredns-66bc5c9577-n4nqj" is not "Ready", error: <nil>
	I1225 19:03:19.388073  290541 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 3.501352532s
	I1225 19:03:19.406371  290541 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1225 19:03:19.415951  290541 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1225 19:03:19.425057  290541 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1225 19:03:19.425375  290541 kubeadm.go:319] [mark-control-plane] Marking the node default-k8s-diff-port-960022 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1225 19:03:19.432994  290541 kubeadm.go:319] [bootstrap-token] Using token: dqiqgc.7rvz0zi3i4hgo1bx
	I1225 19:03:19.434207  290541 out.go:252]   - Configuring RBAC rules ...
	I1225 19:03:19.434361  290541 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1225 19:03:19.437220  290541 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1225 19:03:19.441812  290541 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1225 19:03:19.444110  290541 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1225 19:03:19.447043  290541 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1225 19:03:19.449197  290541 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1225 19:03:19.797886  290541 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1225 19:03:20.209580  290541 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1225 19:03:20.796262  290541 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1225 19:03:20.797070  290541 kubeadm.go:319] 
	I1225 19:03:20.797174  290541 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1225 19:03:20.797193  290541 kubeadm.go:319] 
	I1225 19:03:20.797285  290541 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1225 19:03:20.797295  290541 kubeadm.go:319] 
	I1225 19:03:20.797331  290541 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1225 19:03:20.797402  290541 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1225 19:03:20.797473  290541 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1225 19:03:20.797492  290541 kubeadm.go:319] 
	I1225 19:03:20.797591  290541 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1225 19:03:20.797608  290541 kubeadm.go:319] 
	I1225 19:03:20.797671  290541 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1225 19:03:20.797681  290541 kubeadm.go:319] 
	I1225 19:03:20.797764  290541 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1225 19:03:20.797877  290541 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1225 19:03:20.797994  290541 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1225 19:03:20.798008  290541 kubeadm.go:319] 
	I1225 19:03:20.798138  290541 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1225 19:03:20.798263  290541 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1225 19:03:20.798274  290541 kubeadm.go:319] 
	I1225 19:03:20.798394  290541 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8444 --token dqiqgc.7rvz0zi3i4hgo1bx \
	I1225 19:03:20.798536  290541 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:0fa81e5b6cf900085d4303938dc22eec97b7b2affd914cb977b5ad4f033ddf10 \
	I1225 19:03:20.798569  290541 kubeadm.go:319] 	--control-plane 
	I1225 19:03:20.798582  290541 kubeadm.go:319] 
	I1225 19:03:20.798693  290541 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1225 19:03:20.798700  290541 kubeadm.go:319] 
	I1225 19:03:20.798773  290541 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8444 --token dqiqgc.7rvz0zi3i4hgo1bx \
	I1225 19:03:20.798877  290541 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:0fa81e5b6cf900085d4303938dc22eec97b7b2affd914cb977b5ad4f033ddf10 
	I1225 19:03:20.801807  290541 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1045-gcp\n", err: exit status 1
	I1225 19:03:20.801946  290541 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1225 19:03:20.801992  290541 cni.go:84] Creating CNI manager for ""
	I1225 19:03:20.802005  290541 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1225 19:03:20.804040  290541 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1225 19:03:21.679630  260034 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": (10.062480115s)
	W1225 19:03:21.679674  260034 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	Unable to connect to the server: net/http: TLS handshake timeout
	 output: 
	** stderr ** 
	Unable to connect to the server: net/http: TLS handshake timeout
	
	** /stderr **
	I1225 19:03:21.679685  260034 logs.go:123] Gathering logs for kube-apiserver [e3d5802d1e3d9285d4b0925e9f5391b90305352a65abbab3d6461e977e07719c] ...
	I1225 19:03:21.679703  260034 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e3d5802d1e3d9285d4b0925e9f5391b90305352a65abbab3d6461e977e07719c"
	I1225 19:03:21.721645  260034 logs.go:123] Gathering logs for etcd [b9e20d208a63ece66f4b7d503d5220ba92810f69d6963dee30a8cf70eeec5508] ...
	I1225 19:03:21.721682  260034 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b9e20d208a63ece66f4b7d503d5220ba92810f69d6963dee30a8cf70eeec5508"
	I1225 19:03:21.755144  260034 logs.go:123] Gathering logs for kube-scheduler [47a1daa4bde269c4add70fd957319c2bd011e78566a1187d9a76b78fb4005782] ...
	I1225 19:03:21.755179  260034 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 47a1daa4bde269c4add70fd957319c2bd011e78566a1187d9a76b78fb4005782"
	I1225 19:03:21.783461  260034 logs.go:123] Gathering logs for kube-controller-manager [33343dda71f9e4a85025812aa89a625b9ed4aacd8cf40e537a040b7a352c6338] ...
	I1225 19:03:21.783485  260034 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 33343dda71f9e4a85025812aa89a625b9ed4aacd8cf40e537a040b7a352c6338"
	W1225 19:03:19.665295  283722 pod_ready.go:104] pod "coredns-66bc5c9577-n4nqj" is not "Ready", error: <nil>
	I1225 19:03:21.663961  283722 pod_ready.go:94] pod "coredns-66bc5c9577-n4nqj" is "Ready"
	I1225 19:03:21.663995  283722 pod_ready.go:86] duration metric: took 35.505500978s for pod "coredns-66bc5c9577-n4nqj" in "kube-system" namespace to be "Ready" or be gone ...
	I1225 19:03:21.666425  283722 pod_ready.go:83] waiting for pod "etcd-embed-certs-684693" in "kube-system" namespace to be "Ready" or be gone ...
	I1225 19:03:21.670402  283722 pod_ready.go:94] pod "etcd-embed-certs-684693" is "Ready"
	I1225 19:03:21.670429  283722 pod_ready.go:86] duration metric: took 3.974917ms for pod "etcd-embed-certs-684693" in "kube-system" namespace to be "Ready" or be gone ...
	I1225 19:03:21.672351  283722 pod_ready.go:83] waiting for pod "kube-apiserver-embed-certs-684693" in "kube-system" namespace to be "Ready" or be gone ...
	I1225 19:03:21.676345  283722 pod_ready.go:94] pod "kube-apiserver-embed-certs-684693" is "Ready"
	I1225 19:03:21.676369  283722 pod_ready.go:86] duration metric: took 3.998184ms for pod "kube-apiserver-embed-certs-684693" in "kube-system" namespace to be "Ready" or be gone ...
	I1225 19:03:21.678331  283722 pod_ready.go:83] waiting for pod "kube-controller-manager-embed-certs-684693" in "kube-system" namespace to be "Ready" or be gone ...
	I1225 19:03:21.862122  283722 pod_ready.go:94] pod "kube-controller-manager-embed-certs-684693" is "Ready"
	I1225 19:03:21.862153  283722 pod_ready.go:86] duration metric: took 183.798503ms for pod "kube-controller-manager-embed-certs-684693" in "kube-system" namespace to be "Ready" or be gone ...
	I1225 19:03:22.062056  283722 pod_ready.go:83] waiting for pod "kube-proxy-wzb26" in "kube-system" namespace to be "Ready" or be gone ...
	I1225 19:03:22.461800  283722 pod_ready.go:94] pod "kube-proxy-wzb26" is "Ready"
	I1225 19:03:22.461830  283722 pod_ready.go:86] duration metric: took 399.750088ms for pod "kube-proxy-wzb26" in "kube-system" namespace to be "Ready" or be gone ...
	I1225 19:03:22.662801  283722 pod_ready.go:83] waiting for pod "kube-scheduler-embed-certs-684693" in "kube-system" namespace to be "Ready" or be gone ...
	I1225 19:03:23.062729  283722 pod_ready.go:94] pod "kube-scheduler-embed-certs-684693" is "Ready"
	I1225 19:03:23.062758  283722 pod_ready.go:86] duration metric: took 399.920395ms for pod "kube-scheduler-embed-certs-684693" in "kube-system" namespace to be "Ready" or be gone ...
	I1225 19:03:23.062772  283722 pod_ready.go:40] duration metric: took 36.908039298s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1225 19:03:23.108169  283722 start.go:625] kubectl: 1.35.0, cluster: 1.34.3 (minor skew: 1)
	I1225 19:03:23.110150  283722 out.go:179] * Done! kubectl is now configured to use "embed-certs-684693" cluster and "default" namespace by default
	I1225 19:03:20.805144  290541 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1225 19:03:20.809668  290541 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.3/kubectl ...
	I1225 19:03:20.809688  290541 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2620 bytes)
	I1225 19:03:20.823209  290541 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1225 19:03:21.032445  290541 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1225 19:03:21.032523  290541 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1225 19:03:21.032561  290541 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes default-k8s-diff-port-960022 minikube.k8s.io/updated_at=2025_12_25T19_03_21_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=65b0339f3ab6fa9cf527eb915d9288ef7a9c7fef minikube.k8s.io/name=default-k8s-diff-port-960022 minikube.k8s.io/primary=true
	I1225 19:03:21.127957  290541 ops.go:34] apiserver oom_adj: -16
	I1225 19:03:21.128106  290541 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1225 19:03:21.628946  290541 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1225 19:03:22.129120  290541 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1225 19:03:22.628948  290541 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1225 19:03:23.129126  290541 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1225 19:03:23.628802  290541 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1225 19:03:24.129108  290541 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1225 19:03:24.629105  290541 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1225 19:03:25.128534  290541 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1225 19:03:25.629116  290541 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1225 19:03:25.705041  290541 kubeadm.go:1114] duration metric: took 4.672590994s to wait for elevateKubeSystemPrivileges
	I1225 19:03:25.705078  290541 kubeadm.go:403] duration metric: took 13.818308582s to StartCluster
	I1225 19:03:25.705101  290541 settings.go:142] acquiring lock: {Name:mk8db67a95daebdad9164c803819dcb179c3006a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1225 19:03:25.705173  290541 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22301-5579/kubeconfig
	I1225 19:03:25.707684  290541 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22301-5579/kubeconfig: {Name:mk959de02482281f87c2171d9b2421941fad1e06 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1225 19:03:25.707952  290541 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1225 19:03:25.707983  290541 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.103.2 Port:8444 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1225 19:03:25.708020  290541 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1225 19:03:25.708116  290541 addons.go:70] Setting storage-provisioner=true in profile "default-k8s-diff-port-960022"
	I1225 19:03:25.708153  290541 addons.go:70] Setting default-storageclass=true in profile "default-k8s-diff-port-960022"
	I1225 19:03:25.708165  290541 addons.go:239] Setting addon storage-provisioner=true in "default-k8s-diff-port-960022"
	I1225 19:03:25.708184  290541 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-960022"
	I1225 19:03:25.708194  290541 host.go:66] Checking if "default-k8s-diff-port-960022" exists ...
	I1225 19:03:25.708207  290541 config.go:182] Loaded profile config "default-k8s-diff-port-960022": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1225 19:03:25.708576  290541 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-960022 --format={{.State.Status}}
	I1225 19:03:25.708757  290541 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-960022 --format={{.State.Status}}
	I1225 19:03:25.710353  290541 out.go:179] * Verifying Kubernetes components...
	I1225 19:03:25.711592  290541 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1225 19:03:25.738516  290541 addons.go:239] Setting addon default-storageclass=true in "default-k8s-diff-port-960022"
	I1225 19:03:25.739025  290541 host.go:66] Checking if "default-k8s-diff-port-960022" exists ...
	I1225 19:03:25.739252  290541 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1225 19:03:25.739542  290541 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-960022 --format={{.State.Status}}
	I1225 19:03:25.741329  290541 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1225 19:03:25.741352  290541 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1225 19:03:25.741401  290541 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-960022
	I1225 19:03:25.775854  290541 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33088 SSHKeyPath:/home/jenkins/minikube-integration/22301-5579/.minikube/machines/default-k8s-diff-port-960022/id_rsa Username:docker}
	I1225 19:03:25.777786  290541 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1225 19:03:25.777813  290541 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1225 19:03:25.777870  290541 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-960022
	I1225 19:03:25.805486  290541 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33088 SSHKeyPath:/home/jenkins/minikube-integration/22301-5579/.minikube/machines/default-k8s-diff-port-960022/id_rsa Username:docker}
	I1225 19:03:25.821949  290541 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.103.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1225 19:03:25.881126  290541 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1225 19:03:25.901559  290541 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1225 19:03:25.929322  290541 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1225 19:03:26.031302  290541 start.go:987] {"host.minikube.internal": 192.168.103.1} host record injected into CoreDNS's ConfigMap
	I1225 19:03:26.033114  290541 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-960022" to be "Ready" ...
	I1225 19:03:26.254094  290541 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1225 19:03:24.310991  260034 api_server.go:299] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1225 19:03:24.311381  260034 api_server.go:315] stopped: https://192.168.94.2:8443/healthz: Get "https://192.168.94.2:8443/healthz": dial tcp 192.168.94.2:8443: connect: connection refused
	I1225 19:03:24.311435  260034 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1225 19:03:24.311482  260034 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1225 19:03:24.338559  260034 cri.go:96] found id: "1d4a54214a1675f6c77eff77964da947e4f00cff45e007ad6adb0c898c3e76fa"
	I1225 19:03:24.338578  260034 cri.go:96] found id: "e3d5802d1e3d9285d4b0925e9f5391b90305352a65abbab3d6461e977e07719c"
	I1225 19:03:24.338582  260034 cri.go:96] found id: ""
	I1225 19:03:24.338589  260034 logs.go:282] 2 containers: [1d4a54214a1675f6c77eff77964da947e4f00cff45e007ad6adb0c898c3e76fa e3d5802d1e3d9285d4b0925e9f5391b90305352a65abbab3d6461e977e07719c]
	I1225 19:03:24.338643  260034 ssh_runner.go:195] Run: which crictl
	I1225 19:03:24.342642  260034 ssh_runner.go:195] Run: which crictl
	I1225 19:03:24.346179  260034 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1225 19:03:24.346238  260034 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1225 19:03:24.373923  260034 cri.go:96] found id: "b9e20d208a63ece66f4b7d503d5220ba92810f69d6963dee30a8cf70eeec5508"
	I1225 19:03:24.373948  260034 cri.go:96] found id: ""
	I1225 19:03:24.373955  260034 logs.go:282] 1 containers: [b9e20d208a63ece66f4b7d503d5220ba92810f69d6963dee30a8cf70eeec5508]
	I1225 19:03:24.374012  260034 ssh_runner.go:195] Run: which crictl
	I1225 19:03:24.378709  260034 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1225 19:03:24.378784  260034 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1225 19:03:24.406944  260034 cri.go:96] found id: ""
	I1225 19:03:24.406970  260034 logs.go:282] 0 containers: []
	W1225 19:03:24.406979  260034 logs.go:284] No container was found matching "coredns"
	I1225 19:03:24.406986  260034 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1225 19:03:24.407047  260034 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1225 19:03:24.435879  260034 cri.go:96] found id: "5908cdc560e3221d929c96c779b2e73ece21f96acb63b3adbf448cc7de791f2b"
	I1225 19:03:24.435926  260034 cri.go:96] found id: "47a1daa4bde269c4add70fd957319c2bd011e78566a1187d9a76b78fb4005782"
	I1225 19:03:24.435933  260034 cri.go:96] found id: ""
	I1225 19:03:24.435944  260034 logs.go:282] 2 containers: [5908cdc560e3221d929c96c779b2e73ece21f96acb63b3adbf448cc7de791f2b 47a1daa4bde269c4add70fd957319c2bd011e78566a1187d9a76b78fb4005782]
	I1225 19:03:24.436013  260034 ssh_runner.go:195] Run: which crictl
	I1225 19:03:24.440541  260034 ssh_runner.go:195] Run: which crictl
	I1225 19:03:24.444199  260034 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1225 19:03:24.444263  260034 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1225 19:03:24.477756  260034 cri.go:96] found id: ""
	I1225 19:03:24.477777  260034 logs.go:282] 0 containers: []
	W1225 19:03:24.477788  260034 logs.go:284] No container was found matching "kube-proxy"
	I1225 19:03:24.477796  260034 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1225 19:03:24.477846  260034 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1225 19:03:24.508049  260034 cri.go:96] found id: "0887dc59ddb6090de10772f8a1f50ab0d2afe1586d7c8d118ffeef1810deb88d"
	I1225 19:03:24.508075  260034 cri.go:96] found id: "4d705cb1d8d35bb521ed14e8e331d8b60738d540fae000680afb7e9d190d17db"
	I1225 19:03:24.508081  260034 cri.go:96] found id: "33343dda71f9e4a85025812aa89a625b9ed4aacd8cf40e537a040b7a352c6338"
	I1225 19:03:24.508086  260034 cri.go:96] found id: ""
	I1225 19:03:24.508096  260034 logs.go:282] 3 containers: [0887dc59ddb6090de10772f8a1f50ab0d2afe1586d7c8d118ffeef1810deb88d 4d705cb1d8d35bb521ed14e8e331d8b60738d540fae000680afb7e9d190d17db 33343dda71f9e4a85025812aa89a625b9ed4aacd8cf40e537a040b7a352c6338]
	I1225 19:03:24.508157  260034 ssh_runner.go:195] Run: which crictl
	I1225 19:03:24.512378  260034 ssh_runner.go:195] Run: which crictl
	I1225 19:03:24.516507  260034 ssh_runner.go:195] Run: which crictl
	I1225 19:03:24.520909  260034 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1225 19:03:24.520967  260034 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1225 19:03:24.567685  260034 cri.go:96] found id: ""
	I1225 19:03:24.567723  260034 logs.go:282] 0 containers: []
	W1225 19:03:24.567734  260034 logs.go:284] No container was found matching "kindnet"
	I1225 19:03:24.567740  260034 cri.go:61] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1225 19:03:24.567818  260034 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=storage-provisioner
	I1225 19:03:24.597338  260034 cri.go:96] found id: ""
	I1225 19:03:24.597368  260034 logs.go:282] 0 containers: []
	W1225 19:03:24.597379  260034 logs.go:284] No container was found matching "storage-provisioner"
	I1225 19:03:24.597390  260034 logs.go:123] Gathering logs for kube-controller-manager [33343dda71f9e4a85025812aa89a625b9ed4aacd8cf40e537a040b7a352c6338] ...
	I1225 19:03:24.597405  260034 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 33343dda71f9e4a85025812aa89a625b9ed4aacd8cf40e537a040b7a352c6338"
	I1225 19:03:24.627519  260034 logs.go:123] Gathering logs for CRI-O ...
	I1225 19:03:24.627546  260034 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1225 19:03:24.687714  260034 logs.go:123] Gathering logs for dmesg ...
	I1225 19:03:24.687746  260034 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1225 19:03:24.702742  260034 logs.go:123] Gathering logs for kube-apiserver [1d4a54214a1675f6c77eff77964da947e4f00cff45e007ad6adb0c898c3e76fa] ...
	I1225 19:03:24.702769  260034 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 1d4a54214a1675f6c77eff77964da947e4f00cff45e007ad6adb0c898c3e76fa"
	I1225 19:03:24.735304  260034 logs.go:123] Gathering logs for kube-apiserver [e3d5802d1e3d9285d4b0925e9f5391b90305352a65abbab3d6461e977e07719c] ...
	I1225 19:03:24.735333  260034 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e3d5802d1e3d9285d4b0925e9f5391b90305352a65abbab3d6461e977e07719c"
	I1225 19:03:24.772932  260034 logs.go:123] Gathering logs for kube-controller-manager [0887dc59ddb6090de10772f8a1f50ab0d2afe1586d7c8d118ffeef1810deb88d] ...
	I1225 19:03:24.772970  260034 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 0887dc59ddb6090de10772f8a1f50ab0d2afe1586d7c8d118ffeef1810deb88d"
	I1225 19:03:24.800857  260034 logs.go:123] Gathering logs for kube-controller-manager [4d705cb1d8d35bb521ed14e8e331d8b60738d540fae000680afb7e9d190d17db] ...
	I1225 19:03:24.800886  260034 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 4d705cb1d8d35bb521ed14e8e331d8b60738d540fae000680afb7e9d190d17db"
	I1225 19:03:24.829539  260034 logs.go:123] Gathering logs for container status ...
	I1225 19:03:24.829573  260034 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1225 19:03:24.863230  260034 logs.go:123] Gathering logs for kubelet ...
	I1225 19:03:24.863265  260034 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1225 19:03:24.946336  260034 logs.go:123] Gathering logs for describe nodes ...
	I1225 19:03:24.946371  260034 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1225 19:03:25.008649  260034 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1225 19:03:25.008675  260034 logs.go:123] Gathering logs for etcd [b9e20d208a63ece66f4b7d503d5220ba92810f69d6963dee30a8cf70eeec5508] ...
	I1225 19:03:25.008690  260034 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b9e20d208a63ece66f4b7d503d5220ba92810f69d6963dee30a8cf70eeec5508"
	I1225 19:03:25.044281  260034 logs.go:123] Gathering logs for kube-scheduler [5908cdc560e3221d929c96c779b2e73ece21f96acb63b3adbf448cc7de791f2b] ...
	I1225 19:03:25.044312  260034 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5908cdc560e3221d929c96c779b2e73ece21f96acb63b3adbf448cc7de791f2b"
	I1225 19:03:25.072043  260034 logs.go:123] Gathering logs for kube-scheduler [47a1daa4bde269c4add70fd957319c2bd011e78566a1187d9a76b78fb4005782] ...
	I1225 19:03:25.072068  260034 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 47a1daa4bde269c4add70fd957319c2bd011e78566a1187d9a76b78fb4005782"
	I1225 19:03:26.255568  290541 addons.go:530] duration metric: took 547.547547ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I1225 19:03:26.537099  290541 kapi.go:214] "coredns" deployment in "kube-system" namespace and "default-k8s-diff-port-960022" context rescaled to 1 replicas
	W1225 19:03:28.036233  290541 node_ready.go:57] node "default-k8s-diff-port-960022" has "Ready":"False" status (will retry)
	
	
	==> CRI-O <==
	Dec 25 19:02:49 no-preload-148352 crio[569]: time="2025-12-25T19:02:49.910258809Z" level=info msg="Started container" PID=1770 containerID=af562d65ffa9a9c4b367299de55b10857f967e0f6508713db23f5acea7888a42 description=kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-gbfkd/dashboard-metrics-scraper id=5dfa393c-af4e-41fa-ba19-e321b2cf219e name=/runtime.v1.RuntimeService/StartContainer sandboxID=c803bd78efb4feff7779659502bc08b89d9f59fc19fe72a7696ebc331d76a452
	Dec 25 19:02:49 no-preload-148352 crio[569]: time="2025-12-25T19:02:49.941708482Z" level=info msg="Removing container: 27fa53c998eaf22e08f73724aba07761b5843089747743aa04a69356a323b28d" id=66a01e73-8280-4727-9c95-bd3e72c02d04 name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 25 19:02:49 no-preload-148352 crio[569]: time="2025-12-25T19:02:49.955957866Z" level=info msg="Removed container 27fa53c998eaf22e08f73724aba07761b5843089747743aa04a69356a323b28d: kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-gbfkd/dashboard-metrics-scraper" id=66a01e73-8280-4727-9c95-bd3e72c02d04 name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 25 19:03:02 no-preload-148352 crio[569]: time="2025-12-25T19:03:02.976934133Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=a5075958-f5c4-460e-8275-ee2732d1ec9a name=/runtime.v1.ImageService/ImageStatus
	Dec 25 19:03:02 no-preload-148352 crio[569]: time="2025-12-25T19:03:02.977990889Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=9feeff14-1b9f-4fb3-a009-58c929da05f5 name=/runtime.v1.ImageService/ImageStatus
	Dec 25 19:03:02 no-preload-148352 crio[569]: time="2025-12-25T19:03:02.979406005Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=60f812f4-69ef-440f-93cd-52e3e5706096 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 25 19:03:02 no-preload-148352 crio[569]: time="2025-12-25T19:03:02.979545717Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 25 19:03:02 no-preload-148352 crio[569]: time="2025-12-25T19:03:02.984405264Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 25 19:03:02 no-preload-148352 crio[569]: time="2025-12-25T19:03:02.984598987Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/ea9b372ddef8f2be5018569109579d14a1239fecdc7517dfbea98c7d671f819c/merged/etc/passwd: no such file or directory"
	Dec 25 19:03:02 no-preload-148352 crio[569]: time="2025-12-25T19:03:02.984635096Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/ea9b372ddef8f2be5018569109579d14a1239fecdc7517dfbea98c7d671f819c/merged/etc/group: no such file or directory"
	Dec 25 19:03:02 no-preload-148352 crio[569]: time="2025-12-25T19:03:02.984948099Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 25 19:03:03 no-preload-148352 crio[569]: time="2025-12-25T19:03:03.0170308Z" level=info msg="Created container a92c9aa96d75456f5f2159899f86a6e08449c6b8d6c47573dff69a819b4c3e43: kube-system/storage-provisioner/storage-provisioner" id=60f812f4-69ef-440f-93cd-52e3e5706096 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 25 19:03:03 no-preload-148352 crio[569]: time="2025-12-25T19:03:03.017676026Z" level=info msg="Starting container: a92c9aa96d75456f5f2159899f86a6e08449c6b8d6c47573dff69a819b4c3e43" id=d9c248bc-a0c9-4243-a0fc-b9a0cfcee170 name=/runtime.v1.RuntimeService/StartContainer
	Dec 25 19:03:03 no-preload-148352 crio[569]: time="2025-12-25T19:03:03.019721168Z" level=info msg="Started container" PID=1784 containerID=a92c9aa96d75456f5f2159899f86a6e08449c6b8d6c47573dff69a819b4c3e43 description=kube-system/storage-provisioner/storage-provisioner id=d9c248bc-a0c9-4243-a0fc-b9a0cfcee170 name=/runtime.v1.RuntimeService/StartContainer sandboxID=e58ed4bd2a110888dfd64ef40bb73272e399aa93aa49ce5ad9e1a2920905b380
	Dec 25 19:03:16 no-preload-148352 crio[569]: time="2025-12-25T19:03:16.856765053Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=fea4ee6e-52d3-4c81-bb9e-d6549a139f24 name=/runtime.v1.ImageService/ImageStatus
	Dec 25 19:03:16 no-preload-148352 crio[569]: time="2025-12-25T19:03:16.858037102Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=f21fe86d-1627-4aca-b9fc-92b3f91e7829 name=/runtime.v1.ImageService/ImageStatus
	Dec 25 19:03:16 no-preload-148352 crio[569]: time="2025-12-25T19:03:16.859081162Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-gbfkd/dashboard-metrics-scraper" id=f649d2df-bef6-48a4-9c1b-138fef61e68b name=/runtime.v1.RuntimeService/CreateContainer
	Dec 25 19:03:16 no-preload-148352 crio[569]: time="2025-12-25T19:03:16.859234347Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 25 19:03:16 no-preload-148352 crio[569]: time="2025-12-25T19:03:16.865986801Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 25 19:03:16 no-preload-148352 crio[569]: time="2025-12-25T19:03:16.866436887Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 25 19:03:16 no-preload-148352 crio[569]: time="2025-12-25T19:03:16.893012211Z" level=info msg="Created container 4d94c5064f5944f34332f4dd87f37ed8394eeca7c7aa67e3c9c70c705f594c8b: kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-gbfkd/dashboard-metrics-scraper" id=f649d2df-bef6-48a4-9c1b-138fef61e68b name=/runtime.v1.RuntimeService/CreateContainer
	Dec 25 19:03:16 no-preload-148352 crio[569]: time="2025-12-25T19:03:16.893679125Z" level=info msg="Starting container: 4d94c5064f5944f34332f4dd87f37ed8394eeca7c7aa67e3c9c70c705f594c8b" id=9b37b4ef-2b84-41c0-b815-707ca6109487 name=/runtime.v1.RuntimeService/StartContainer
	Dec 25 19:03:16 no-preload-148352 crio[569]: time="2025-12-25T19:03:16.896039791Z" level=info msg="Started container" PID=1820 containerID=4d94c5064f5944f34332f4dd87f37ed8394eeca7c7aa67e3c9c70c705f594c8b description=kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-gbfkd/dashboard-metrics-scraper id=9b37b4ef-2b84-41c0-b815-707ca6109487 name=/runtime.v1.RuntimeService/StartContainer sandboxID=c803bd78efb4feff7779659502bc08b89d9f59fc19fe72a7696ebc331d76a452
	Dec 25 19:03:17 no-preload-148352 crio[569]: time="2025-12-25T19:03:17.014368721Z" level=info msg="Removing container: af562d65ffa9a9c4b367299de55b10857f967e0f6508713db23f5acea7888a42" id=8f11160c-d919-426d-9f24-e7eefaa16086 name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 25 19:03:17 no-preload-148352 crio[569]: time="2025-12-25T19:03:17.025297352Z" level=info msg="Removed container af562d65ffa9a9c4b367299de55b10857f967e0f6508713db23f5acea7888a42: kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-gbfkd/dashboard-metrics-scraper" id=8f11160c-d919-426d-9f24-e7eefaa16086 name=/runtime.v1.RuntimeService/RemoveContainer
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED             STATE               NAME                        ATTEMPT             POD ID              POD                                          NAMESPACE
	4d94c5064f594       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           12 seconds ago      Exited              dashboard-metrics-scraper   3                   c803bd78efb4f       dashboard-metrics-scraper-867fb5f87b-gbfkd   kubernetes-dashboard
	a92c9aa96d754       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           25 seconds ago      Running             storage-provisioner         1                   e58ed4bd2a110       storage-provisioner                          kube-system
	901f76356987e       docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029   48 seconds ago      Running             kubernetes-dashboard        0                   26c6f721380f5       kubernetes-dashboard-b84665fb8-5ngsn         kubernetes-dashboard
	58b0d8852ca6e       56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c                                           56 seconds ago      Running             busybox                     1                   f88a7476501fc       busybox                                      default
	cd48b0389f086       aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139                                           56 seconds ago      Running             coredns                     0                   116d413a07191       coredns-7d764666f9-lqvms                     kube-system
	e3e24c594c2e9       4921d7a6dffa922dd679732ba4797085c4f39e9a53bee8b6fdb1d463e8571251                                           56 seconds ago      Running             kindnet-cni                 0                   1c154aad786b6       kindnet-jx25d                                kube-system
	cc74c6a68e0e6       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           56 seconds ago      Exited              storage-provisioner         0                   e58ed4bd2a110       storage-provisioner                          kube-system
	55f12125d0d2e       af0321f3a4f388cfb978464739c323ebf891a7b0b50cdfd7179e92f141dad42a                                           56 seconds ago      Running             kube-proxy                  0                   7e635c0ff1e6d       kube-proxy-j2p4x                             kube-system
	2f3a4cbe6949d       5032a56602e1b9bd8856699701b6148aa1b9901d05b61f893df3b57f84aca614                                           59 seconds ago      Running             kube-controller-manager     0                   4036767cc992f       kube-controller-manager-no-preload-148352    kube-system
	bb2011f8a3910       0a108f7189562e99793bdecab61fdf1a7c9d913af3385de9da17fb9d6ff430e2                                           59 seconds ago      Running             etcd                        0                   b8df7d7974bf9       etcd-no-preload-148352                       kube-system
	47366819032b3       58865405a13bccac1d74bc3f446dddd22e6ef0d7ee8b52363c86dd31838976ce                                           59 seconds ago      Running             kube-apiserver              0                   afce916cd6f23       kube-apiserver-no-preload-148352             kube-system
	aa7daa7b6db66       73f80cdc073daa4d501207f9e6dec1fa9eea5f27e8d347b8a0c4bad8811eecdc                                           59 seconds ago      Running             kube-scheduler              0                   14060525355de       kube-scheduler-no-preload-148352             kube-system
	
	
	==> coredns [cd48b0389f0865406b664205dcf7168f2c40b064af72c3b306f1eaf26e9b9128] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.13.1
	linux/amd64, go1.25.2, 1db4568
	[INFO] 127.0.0.1:48013 - 64132 "HINFO IN 6549156959973943132.7448233143510595798. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.029508599s
	[INFO] plugin/ready: Plugins not ready: "kubernetes"
	[INFO] plugin/ready: Plugins not ready: "kubernetes"
	[INFO] plugin/ready: Plugins not ready: "kubernetes"
	[INFO] plugin/ready: Plugins not ready: "kubernetes"
	[ERROR] plugin/kubernetes: Failed to watch
	[ERROR] plugin/kubernetes: Failed to watch
	[ERROR] plugin/kubernetes: Failed to watch
	
	
	==> describe nodes <==
	Name:               no-preload-148352
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=no-preload-148352
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=65b0339f3ab6fa9cf527eb915d9288ef7a9c7fef
	                    minikube.k8s.io/name=no-preload-148352
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_25T19_01_34_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 25 Dec 2025 19:01:31 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-148352
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 25 Dec 2025 19:03:22 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 25 Dec 2025 19:03:02 +0000   Thu, 25 Dec 2025 19:01:30 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 25 Dec 2025 19:03:02 +0000   Thu, 25 Dec 2025 19:01:30 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 25 Dec 2025 19:03:02 +0000   Thu, 25 Dec 2025 19:01:30 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 25 Dec 2025 19:03:02 +0000   Thu, 25 Dec 2025 19:02:31 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    no-preload-148352
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863360Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863360Ki
	  pods:               110
	System Info:
	  Machine ID:                 6a05ebbd9dbde47388fc365d694bbbfd
	  System UUID:                de63609a-6f51-4a32-ad70-d0138650b5f8
	  Boot ID:                    665c5054-bd76-444c-ba4d-23c4edde1464
	  Kernel Version:             6.8.0-1045-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.3
	  Kubelet Version:            v1.35.0-rc.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         93s
	  kube-system                 coredns-7d764666f9-lqvms                      100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     109s
	  kube-system                 etcd-no-preload-148352                        100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         115s
	  kube-system                 kindnet-jx25d                                 100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      109s
	  kube-system                 kube-apiserver-no-preload-148352              250m (3%)     0 (0%)      0 (0%)           0 (0%)         115s
	  kube-system                 kube-controller-manager-no-preload-148352     200m (2%)     0 (0%)      0 (0%)           0 (0%)         114s
	  kube-system                 kube-proxy-j2p4x                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         109s
	  kube-system                 kube-scheduler-no-preload-148352              100m (1%)     0 (0%)      0 (0%)           0 (0%)         114s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         108s
	  kubernetes-dashboard        dashboard-metrics-scraper-867fb5f87b-gbfkd    0 (0%)        0 (0%)      0 (0%)           0 (0%)         53s
	  kubernetes-dashboard        kubernetes-dashboard-b84665fb8-5ngsn          0 (0%)        0 (0%)      0 (0%)           0 (0%)         53s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason          Age   From             Message
	  ----    ------          ----  ----             -------
	  Normal  RegisteredNode  110s  node-controller  Node no-preload-148352 event: Registered Node no-preload-148352 in Controller
	  Normal  RegisteredNode  54s   node-controller  Node no-preload-148352 event: Registered Node no-preload-148352 in Controller
	
	
	==> dmesg <==
	[Dec25 18:17] MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
	[  +0.001703] TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details.
	[  +0.001000] MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
	[  +0.084012] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
	[  +0.391152] i8042: Warning: Keylock active
	[  +0.010665] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.485479] block sda: the capability attribute has been deprecated.
	[  +0.079658] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.024208] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +4.790329] kauditd_printk_skb: 47 callbacks suppressed
	
	
	==> etcd [bb2011f8a39109b797fb7b1bf01cff317738a18c03f9c14941817a74f2e323b6] <==
	{"level":"info","ts":"2025-12-25T19:02:29.435282Z","caller":"membership/cluster.go:674","msg":"updated cluster version","cluster-id":"68eaea490fab4e05","local-member-id":"9f0758e1c58a86ed","from":"3.6","to":"3.6"}
	{"level":"info","ts":"2025-12-25T19:02:29.435303Z","caller":"embed/etcd.go:890","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2025-12-25T19:02:29.435369Z","caller":"fileutil/purge.go:49","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap.db","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-12-25T19:02:29.435409Z","caller":"fileutil/purge.go:49","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-12-25T19:02:29.435419Z","caller":"fileutil/purge.go:49","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-12-25T19:02:29.436091Z","caller":"embed/etcd.go:640","msg":"serving peer traffic","address":"192.168.85.2:2380"}
	{"level":"info","ts":"2025-12-25T19:02:29.436121Z","caller":"embed/etcd.go:611","msg":"cmux::serve","address":"192.168.85.2:2380"}
	{"level":"info","ts":"2025-12-25T19:02:30.322204Z","logger":"raft","caller":"v3@v3.6.0/raft.go:988","msg":"9f0758e1c58a86ed is starting a new election at term 2"}
	{"level":"info","ts":"2025-12-25T19:02:30.322253Z","logger":"raft","caller":"v3@v3.6.0/raft.go:930","msg":"9f0758e1c58a86ed became pre-candidate at term 2"}
	{"level":"info","ts":"2025-12-25T19:02:30.322329Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1077","msg":"9f0758e1c58a86ed received MsgPreVoteResp from 9f0758e1c58a86ed at term 2"}
	{"level":"info","ts":"2025-12-25T19:02:30.322342Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1693","msg":"9f0758e1c58a86ed has received 1 MsgPreVoteResp votes and 0 vote rejections"}
	{"level":"info","ts":"2025-12-25T19:02:30.322359Z","logger":"raft","caller":"v3@v3.6.0/raft.go:912","msg":"9f0758e1c58a86ed became candidate at term 3"}
	{"level":"info","ts":"2025-12-25T19:02:30.323161Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1077","msg":"9f0758e1c58a86ed received MsgVoteResp from 9f0758e1c58a86ed at term 3"}
	{"level":"info","ts":"2025-12-25T19:02:30.323175Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1693","msg":"9f0758e1c58a86ed has received 1 MsgVoteResp votes and 0 vote rejections"}
	{"level":"info","ts":"2025-12-25T19:02:30.323188Z","logger":"raft","caller":"v3@v3.6.0/raft.go:970","msg":"9f0758e1c58a86ed became leader at term 3"}
	{"level":"info","ts":"2025-12-25T19:02:30.323199Z","logger":"raft","caller":"v3@v3.6.0/node.go:370","msg":"raft.node: 9f0758e1c58a86ed elected leader 9f0758e1c58a86ed at term 3"}
	{"level":"info","ts":"2025-12-25T19:02:30.323814Z","caller":"etcdserver/server.go:1820","msg":"published local member to cluster through raft","local-member-id":"9f0758e1c58a86ed","local-member-attributes":"{Name:no-preload-148352 ClientURLs:[https://192.168.85.2:2379]}","cluster-id":"68eaea490fab4e05","publish-timeout":"7s"}
	{"level":"info","ts":"2025-12-25T19:02:30.323821Z","caller":"embed/serve.go:138","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-12-25T19:02:30.323854Z","caller":"embed/serve.go:138","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-12-25T19:02:30.324132Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-12-25T19:02:30.324221Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2025-12-25T19:02:30.325652Z","caller":"v3rpc/health.go:63","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-12-25T19:02:30.325864Z","caller":"v3rpc/health.go:63","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-12-25T19:02:30.328745Z","caller":"embed/serve.go:283","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.85.2:2379"}
	{"level":"info","ts":"2025-12-25T19:02:30.328821Z","caller":"embed/serve.go:283","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	
	
	==> kernel <==
	 19:03:29 up 45 min,  0 user,  load average: 2.99, 2.53, 1.82
	Linux no-preload-148352 6.8.0-1045-gcp #48~22.04.1-Ubuntu SMP Tue Nov 25 13:07:56 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [e3e24c594c2e90a5b96c7c7292be2263392feb5e70d00b1ec00eb84d2a0fbf17] <==
	I1225 19:02:32.464934       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1225 19:02:32.465232       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1225 19:02:32.465384       1 main.go:148] setting mtu 1500 for CNI 
	I1225 19:02:32.465409       1 main.go:178] kindnetd IP family: "ipv4"
	I1225 19:02:32.465440       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-12-25T19:02:32Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1225 19:02:32.665131       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1225 19:02:32.665177       1 controller.go:381] "Waiting for informer caches to sync"
	I1225 19:02:32.665191       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1225 19:02:32.665569       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1225 19:02:32.965382       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1225 19:02:32.965410       1 metrics.go:72] Registering metrics
	I1225 19:02:32.965465       1 controller.go:711] "Syncing nftables rules"
	I1225 19:02:42.665040       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1225 19:02:42.665125       1 main.go:301] handling current node
	I1225 19:02:52.665540       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1225 19:02:52.665574       1 main.go:301] handling current node
	I1225 19:03:02.665307       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1225 19:03:02.665340       1 main.go:301] handling current node
	I1225 19:03:12.666191       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1225 19:03:12.666240       1 main.go:301] handling current node
	I1225 19:03:22.666085       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1225 19:03:22.666149       1 main.go:301] handling current node
	
	
	==> kube-apiserver [47366819032b30036912ff5f63dfa944e254928f33476aba04aaf69af88aaf71] <==
	I1225 19:02:31.250416       1 shared_informer.go:356] "Caches are synced" controller="crd-autoregister"
	I1225 19:02:31.250655       1 shared_informer.go:377] "Caches are synced"
	I1225 19:02:31.251082       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1225 19:02:31.251293       1 aggregator.go:187] initial CRD sync complete...
	I1225 19:02:31.251308       1 autoregister_controller.go:144] Starting autoregister controller
	I1225 19:02:31.251315       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1225 19:02:31.251320       1 cache.go:39] Caches are synced for autoregister controller
	I1225 19:02:31.251463       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1225 19:02:31.251484       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1225 19:02:31.251590       1 shared_informer.go:377] "Caches are synced"
	I1225 19:02:31.256275       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	E1225 19:02:31.257446       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1225 19:02:31.263121       1 cidrallocator.go:302] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1225 19:02:31.278553       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1225 19:02:31.511216       1 controller.go:667] quota admission added evaluator for: namespaces
	I1225 19:02:31.538659       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1225 19:02:31.555647       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1225 19:02:31.561985       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1225 19:02:31.567853       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1225 19:02:31.600579       1 alloc.go:329] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.106.117.81"}
	I1225 19:02:31.611464       1 alloc.go:329] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.111.85.114"}
	I1225 19:02:32.157204       1 storage_scheduling.go:139] all system priority classes are created successfully or already exist.
	I1225 19:02:34.776435       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1225 19:02:34.824840       1 controller.go:667] quota admission added evaluator for: endpoints
	I1225 19:02:35.024250       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [2f3a4cbe6949d2645c6993b4cc7109abf638d7d4a738d0209ae98d0d57e87c1b] <==
	I1225 19:02:34.379108       1 shared_informer.go:377] "Caches are synced"
	I1225 19:02:34.379439       1 node_lifecycle_controller.go:1080] "Controller detected that zone is now in new state" zone="" newState="Normal"
	I1225 19:02:34.379426       1 range_allocator.go:181] "Starting range CIDR allocator"
	I1225 19:02:34.380004       1 shared_informer.go:370] "Waiting for caches to sync"
	I1225 19:02:34.380010       1 shared_informer.go:377] "Caches are synced"
	I1225 19:02:34.380061       1 shared_informer.go:377] "Caches are synced"
	I1225 19:02:34.380110       1 shared_informer.go:377] "Caches are synced"
	I1225 19:02:34.380129       1 shared_informer.go:377] "Caches are synced"
	I1225 19:02:34.380152       1 shared_informer.go:377] "Caches are synced"
	I1225 19:02:34.380212       1 shared_informer.go:377] "Caches are synced"
	I1225 19:02:34.380275       1 shared_informer.go:377] "Caches are synced"
	I1225 19:02:34.380293       1 shared_informer.go:377] "Caches are synced"
	I1225 19:02:34.380463       1 shared_informer.go:377] "Caches are synced"
	I1225 19:02:34.382305       1 shared_informer.go:377] "Caches are synced"
	I1225 19:02:34.383150       1 shared_informer.go:377] "Caches are synced"
	I1225 19:02:34.385214       1 shared_informer.go:370] "Waiting for caches to sync"
	I1225 19:02:34.387963       1 shared_informer.go:377] "Caches are synced"
	I1225 19:02:34.387981       1 shared_informer.go:377] "Caches are synced"
	I1225 19:02:34.387989       1 shared_informer.go:377] "Caches are synced"
	I1225 19:02:34.388014       1 shared_informer.go:377] "Caches are synced"
	I1225 19:02:34.401416       1 shared_informer.go:377] "Caches are synced"
	I1225 19:02:34.479760       1 shared_informer.go:377] "Caches are synced"
	I1225 19:02:34.479780       1 garbagecollector.go:166] "Garbage collector: all resource monitors have synced"
	I1225 19:02:34.479784       1 garbagecollector.go:169] "Proceeding to collect garbage"
	I1225 19:02:34.485374       1 shared_informer.go:377] "Caches are synced"
	
	
	==> kube-proxy [55f12125d0d2e0b7f466cdebd8a8770b9c7062b5f540d2dcaf8cca748d880059] <==
	I1225 19:02:32.257801       1 server_linux.go:53] "Using iptables proxy"
	I1225 19:02:32.329041       1 shared_informer.go:370] "Waiting for caches to sync"
	I1225 19:02:32.429475       1 shared_informer.go:377] "Caches are synced"
	I1225 19:02:32.429509       1 server.go:218] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1225 19:02:32.429615       1 server.go:255] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1225 19:02:32.447868       1 server.go:264] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1225 19:02:32.447958       1 server_linux.go:136] "Using iptables Proxier"
	I1225 19:02:32.453260       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1225 19:02:32.453566       1 server.go:529] "Version info" version="v1.35.0-rc.1"
	I1225 19:02:32.453592       1 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1225 19:02:32.455613       1 config.go:309] "Starting node config controller"
	I1225 19:02:32.455772       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1225 19:02:32.455806       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1225 19:02:32.455914       1 config.go:403] "Starting serviceCIDR config controller"
	I1225 19:02:32.455954       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1225 19:02:32.455916       1 config.go:200] "Starting service config controller"
	I1225 19:02:32.456032       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1225 19:02:32.455928       1 config.go:106] "Starting endpoint slice config controller"
	I1225 19:02:32.456092       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1225 19:02:32.556416       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1225 19:02:32.556429       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1225 19:02:32.556445       1 shared_informer.go:356] "Caches are synced" controller="service config"
	
	
	==> kube-scheduler [aa7daa7b6db664c65cb970f6372118ff3edf3e9ed558da28a08f0e134f753051] <==
	I1225 19:02:29.750975       1 serving.go:386] Generated self-signed cert in-memory
	W1225 19:02:31.178961       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1225 19:02:31.178999       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1225 19:02:31.179011       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1225 19:02:31.179021       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1225 19:02:31.208364       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.35.0-rc.1"
	I1225 19:02:31.208414       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1225 19:02:31.214502       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1225 19:02:31.214541       1 shared_informer.go:370] "Waiting for caches to sync"
	I1225 19:02:31.214655       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1225 19:02:31.215504       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1225 19:02:31.315354       1 shared_informer.go:377] "Caches are synced"
	
	
	==> kubelet <==
	Dec 25 19:02:48 no-preload-148352 kubelet[724]: E1225 19:02:48.934774     724 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/etcd-no-preload-148352" containerName="etcd"
	Dec 25 19:02:49 no-preload-148352 kubelet[724]: E1225 19:02:49.857058     724 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-gbfkd" containerName="dashboard-metrics-scraper"
	Dec 25 19:02:49 no-preload-148352 kubelet[724]: I1225 19:02:49.857127     724 scope.go:122] "RemoveContainer" containerID="27fa53c998eaf22e08f73724aba07761b5843089747743aa04a69356a323b28d"
	Dec 25 19:02:49 no-preload-148352 kubelet[724]: I1225 19:02:49.939698     724 scope.go:122] "RemoveContainer" containerID="27fa53c998eaf22e08f73724aba07761b5843089747743aa04a69356a323b28d"
	Dec 25 19:02:49 no-preload-148352 kubelet[724]: E1225 19:02:49.939984     724 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-gbfkd" containerName="dashboard-metrics-scraper"
	Dec 25 19:02:49 no-preload-148352 kubelet[724]: I1225 19:02:49.940017     724 scope.go:122] "RemoveContainer" containerID="af562d65ffa9a9c4b367299de55b10857f967e0f6508713db23f5acea7888a42"
	Dec 25 19:02:49 no-preload-148352 kubelet[724]: E1225 19:02:49.940217     724 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-867fb5f87b-gbfkd_kubernetes-dashboard(3a3db07e-732b-41e1-ab00-f60b35e0a14c)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-gbfkd" podUID="3a3db07e-732b-41e1-ab00-f60b35e0a14c"
	Dec 25 19:02:52 no-preload-148352 kubelet[724]: E1225 19:02:52.304142     724 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-gbfkd" containerName="dashboard-metrics-scraper"
	Dec 25 19:02:52 no-preload-148352 kubelet[724]: I1225 19:02:52.304191     724 scope.go:122] "RemoveContainer" containerID="af562d65ffa9a9c4b367299de55b10857f967e0f6508713db23f5acea7888a42"
	Dec 25 19:02:52 no-preload-148352 kubelet[724]: E1225 19:02:52.304411     724 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-867fb5f87b-gbfkd_kubernetes-dashboard(3a3db07e-732b-41e1-ab00-f60b35e0a14c)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-gbfkd" podUID="3a3db07e-732b-41e1-ab00-f60b35e0a14c"
	Dec 25 19:03:02 no-preload-148352 kubelet[724]: I1225 19:03:02.976417     724 scope.go:122] "RemoveContainer" containerID="cc74c6a68e0e6d46d88281d2d099411a95d6a602b396328af5ea78c57473e7dc"
	Dec 25 19:03:10 no-preload-148352 kubelet[724]: E1225 19:03:10.230249     724 prober_manager.go:209] "Readiness probe already exists for container" pod="kube-system/coredns-7d764666f9-lqvms" containerName="coredns"
	Dec 25 19:03:16 no-preload-148352 kubelet[724]: E1225 19:03:16.856123     724 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-gbfkd" containerName="dashboard-metrics-scraper"
	Dec 25 19:03:16 no-preload-148352 kubelet[724]: I1225 19:03:16.856176     724 scope.go:122] "RemoveContainer" containerID="af562d65ffa9a9c4b367299de55b10857f967e0f6508713db23f5acea7888a42"
	Dec 25 19:03:17 no-preload-148352 kubelet[724]: I1225 19:03:17.013049     724 scope.go:122] "RemoveContainer" containerID="af562d65ffa9a9c4b367299de55b10857f967e0f6508713db23f5acea7888a42"
	Dec 25 19:03:17 no-preload-148352 kubelet[724]: E1225 19:03:17.013281     724 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-gbfkd" containerName="dashboard-metrics-scraper"
	Dec 25 19:03:17 no-preload-148352 kubelet[724]: I1225 19:03:17.013319     724 scope.go:122] "RemoveContainer" containerID="4d94c5064f5944f34332f4dd87f37ed8394eeca7c7aa67e3c9c70c705f594c8b"
	Dec 25 19:03:17 no-preload-148352 kubelet[724]: E1225 19:03:17.013521     724 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-867fb5f87b-gbfkd_kubernetes-dashboard(3a3db07e-732b-41e1-ab00-f60b35e0a14c)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-gbfkd" podUID="3a3db07e-732b-41e1-ab00-f60b35e0a14c"
	Dec 25 19:03:22 no-preload-148352 kubelet[724]: E1225 19:03:22.304419     724 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-gbfkd" containerName="dashboard-metrics-scraper"
	Dec 25 19:03:22 no-preload-148352 kubelet[724]: I1225 19:03:22.304460     724 scope.go:122] "RemoveContainer" containerID="4d94c5064f5944f34332f4dd87f37ed8394eeca7c7aa67e3c9c70c705f594c8b"
	Dec 25 19:03:22 no-preload-148352 kubelet[724]: E1225 19:03:22.304643     724 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-867fb5f87b-gbfkd_kubernetes-dashboard(3a3db07e-732b-41e1-ab00-f60b35e0a14c)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-gbfkd" podUID="3a3db07e-732b-41e1-ab00-f60b35e0a14c"
	Dec 25 19:03:23 no-preload-148352 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Dec 25 19:03:23 no-preload-148352 systemd[1]: kubelet.service: Deactivated successfully.
	Dec 25 19:03:23 no-preload-148352 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 25 19:03:23 no-preload-148352 systemd[1]: kubelet.service: Consumed 1.773s CPU time.
	
	
	==> kubernetes-dashboard [901f76356987e3e596f87ef92b962ce67c143eef3f37a7b4ac37dbde884cecae] <==
	2025/12/25 19:02:40 Using namespace: kubernetes-dashboard
	2025/12/25 19:02:40 Using in-cluster config to connect to apiserver
	2025/12/25 19:02:40 Using secret token for csrf signing
	2025/12/25 19:02:40 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/12/25 19:02:40 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/12/25 19:02:40 Successful initial request to the apiserver, version: v1.35.0-rc.1
	2025/12/25 19:02:40 Generating JWE encryption key
	2025/12/25 19:02:40 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/12/25 19:02:40 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/12/25 19:02:40 Initializing JWE encryption key from synchronized object
	2025/12/25 19:02:40 Creating in-cluster Sidecar client
	2025/12/25 19:02:40 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/12/25 19:02:40 Serving insecurely on HTTP port: 9090
	2025/12/25 19:03:10 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/12/25 19:02:40 Starting overwatch
	
	
	==> storage-provisioner [a92c9aa96d75456f5f2159899f86a6e08449c6b8d6c47573dff69a819b4c3e43] <==
	I1225 19:03:03.033937       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1225 19:03:03.042604       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1225 19:03:03.042657       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1225 19:03:03.044804       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1225 19:03:06.499778       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1225 19:03:10.760268       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1225 19:03:14.359017       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1225 19:03:17.413612       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1225 19:03:20.436355       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1225 19:03:20.440724       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1225 19:03:20.440883       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1225 19:03:20.440949       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"4f3e1ed8-81d0-4039-80b9-a2f1ed9a1f41", APIVersion:"v1", ResourceVersion:"653", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-148352_6f856ba6-fd2f-4dc6-9a66-7f9f70461a64 became leader
	I1225 19:03:20.441037       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-148352_6f856ba6-fd2f-4dc6-9a66-7f9f70461a64!
	W1225 19:03:20.442693       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1225 19:03:20.445998       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1225 19:03:20.541300       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-148352_6f856ba6-fd2f-4dc6-9a66-7f9f70461a64!
	W1225 19:03:22.449492       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1225 19:03:22.454378       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1225 19:03:24.458502       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1225 19:03:24.463715       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1225 19:03:26.467886       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1225 19:03:26.474783       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1225 19:03:28.478458       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1225 19:03:28.482450       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	
	
	==> storage-provisioner [cc74c6a68e0e6d46d88281d2d099411a95d6a602b396328af5ea78c57473e7dc] <==
	I1225 19:02:32.225172       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1225 19:03:02.229535       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

                                                
                                                
-- /stdout --
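Note on the storage-provisioner logs above: the first instance (cc74c6a6...) died with "error getting server version: Get \"https://10.96.0.1:443/version?timeout=32s\": dial tcp 10.96.0.1:443: i/o timeout", i.e. it could not reach the in-cluster apiserver service before its client timeout. A minimal, self-contained Go sketch of that same bounded version probe; the URL and 32s timeout are taken from the log line, the TLS-skip and everything else are illustrative assumptions, not the provisioner's actual code.

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"os"
	"time"
)

// Probe the in-cluster apiserver /version endpoint with a bounded timeout,
// roughly what the failed storage-provisioner start above was doing when it
// hit "dial tcp 10.96.0.1:443: i/o timeout". Skipping TLS verification only
// to keep the sketch short; a real client would use the service-account CA.
func main() {
	client := &http.Client{
		Timeout: 32 * time.Second,
		Transport: &http.Transport{
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	resp, err := client.Get("https://10.96.0.1:443/version")
	if err != nil {
		fmt.Fprintln(os.Stderr, "error getting server version:", err)
		os.Exit(1)
	}
	defer resp.Body.Close()
	fmt.Println("apiserver reachable, status:", resp.Status)
}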
helpers_test.go:263: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-148352 -n no-preload-148352
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-148352 -n no-preload-148352: exit status 2 (331.551846ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:270: (dbg) Run:  kubectl --context no-preload-148352 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:294: <<< TestStartStop/group/no-preload/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:295: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/no-preload/serial/Pause (6.39s)
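Note on the repeated CrashLoopBackOff entries in the kubelet log above: the restart delay for the failing dashboard-metrics-scraper container doubles per attempt (the log shows "back-off 20s" and later "back-off 40s"). A minimal Go sketch of that doubling, assuming kubelet's usual 10s initial delay and 5m cap; illustrative only, not kubelet's implementation.

package main

import (
	"fmt"
	"time"
)

// crashLoopDelay mirrors the progression visible in the kubelet log above
// ("back-off 20s", then "back-off 40s"): an assumed 10s initial delay,
// doubled per failed restart, capped at 5 minutes.
func crashLoopDelay(restarts int) time.Duration {
	delay := 10 * time.Second
	for i := 0; i < restarts; i++ {
		delay *= 2
		if delay >= 5*time.Minute {
			return 5 * time.Minute
		}
	}
	return delay
}

func main() {
	for r := 0; r <= 6; r++ {
		fmt.Printf("restart %d -> back-off %s\n", r, crashLoopDelay(r))
	}
}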

                                                
                                    
TestStartStop/group/embed-certs/serial/Pause (6.13s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p embed-certs-684693 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 pause -p embed-certs-684693 --alsologtostderr -v=1: exit status 80 (2.091945898s)

                                                
                                                
-- stdout --
	* Pausing node embed-certs-684693 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1225 19:03:34.897990  297599 out.go:360] Setting OutFile to fd 1 ...
	I1225 19:03:34.898130  297599 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1225 19:03:34.898141  297599 out.go:374] Setting ErrFile to fd 2...
	I1225 19:03:34.898149  297599 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1225 19:03:34.898439  297599 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22301-5579/.minikube/bin
	I1225 19:03:34.898746  297599 out.go:368] Setting JSON to false
	I1225 19:03:34.898770  297599 mustload.go:66] Loading cluster: embed-certs-684693
	I1225 19:03:34.899291  297599 config.go:182] Loaded profile config "embed-certs-684693": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1225 19:03:34.899916  297599 cli_runner.go:164] Run: docker container inspect embed-certs-684693 --format={{.State.Status}}
	I1225 19:03:34.920737  297599 host.go:66] Checking if "embed-certs-684693" exists ...
	I1225 19:03:34.921128  297599 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1225 19:03:34.984699  297599 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:4 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:75 OomKillDisable:false NGoroutines:91 SystemTime:2025-12-25 19:03:34.974641785 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:dea7da592f5d1d2b7755e3a161be07f43fad8f75 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1225 19:03:34.985426  297599 pause.go:60] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-
cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/22316/minikube-v1.37.0-1766570787-22316-amd64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1766570787-22316/minikube-v1.37.0-1766570787-22316-amd64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1766570787-22316-amd64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qe
mu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) preload-source:auto profile:embed-certs-684693 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) rosetta:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(boo
l=true) wantupdatenotification:%!s(bool=true) wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1225 19:03:34.987555  297599 out.go:179] * Pausing node embed-certs-684693 ... 
	I1225 19:03:34.988859  297599 host.go:66] Checking if "embed-certs-684693" exists ...
	I1225 19:03:34.989117  297599 ssh_runner.go:195] Run: systemctl --version
	I1225 19:03:34.989151  297599 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-684693
	I1225 19:03:35.007724  297599 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33083 SSHKeyPath:/home/jenkins/minikube-integration/22301-5579/.minikube/machines/embed-certs-684693/id_rsa Username:docker}
	I1225 19:03:35.098190  297599 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1225 19:03:35.134644  297599 pause.go:52] kubelet running: true
	I1225 19:03:35.134709  297599 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1225 19:03:35.310985  297599 cri.go:61] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1225 19:03:35.311070  297599 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1225 19:03:35.386624  297599 cri.go:96] found id: "0586e432ee43e609607ade028951a4592cec0588e09512fa74f54317148acb65"
	I1225 19:03:35.386647  297599 cri.go:96] found id: "e3f10798d2c5cc7aa34b0f7c0769cc5f3bc2ddad54195a5724aa2248050b4d45"
	I1225 19:03:35.386653  297599 cri.go:96] found id: "ea9cdb66e74e5837779af5d99fbae5b1f3b687573b29124b6deecdc991179c3c"
	I1225 19:03:35.386658  297599 cri.go:96] found id: "6be834e877742b8bfa0bc2d501ed6913a2453ae40c561e27beb542006c7d47e6"
	I1225 19:03:35.386663  297599 cri.go:96] found id: "294fb941f29133cb40754cbd33757b426445328bda2c2356fe6d08b22884da2b"
	I1225 19:03:35.386668  297599 cri.go:96] found id: "8d7e8dc3eb792d198de0248572b5e18d4499c1684bda9bf5f17def41a2fab818"
	I1225 19:03:35.386672  297599 cri.go:96] found id: "8d2b7baedf500ee7f1bfe8f8dd198f5e17d7d4765eb8784fa1263ff20a37911d"
	I1225 19:03:35.386677  297599 cri.go:96] found id: "f163abb6ccc23812b01aab1787a1e9cb17c7aa29ac0031c5d3d528bd0d223238"
	I1225 19:03:35.386681  297599 cri.go:96] found id: "96d9542c197212f0c05bc896dbb04b02a41cb77ea63e21dd98bd9fec4091843d"
	I1225 19:03:35.386692  297599 cri.go:96] found id: "8b85a58f6727b85925d66ae7c892925d7f0d6ad84cf0a49ac39c7dac9256cb8d"
	I1225 19:03:35.386700  297599 cri.go:96] found id: "cd0104e7b2433665e7a7678289b4f5de2377208d5e5b7d7a93d384d481448c5f"
	I1225 19:03:35.386705  297599 cri.go:96] found id: ""
	I1225 19:03:35.386752  297599 ssh_runner.go:195] Run: sudo runc list -f json
	I1225 19:03:35.398983  297599 retry.go:84] will retry after 200ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-25T19:03:35Z" level=error msg="open /run/runc: no such file or directory"
	I1225 19:03:35.611371  297599 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1225 19:03:35.624612  297599 pause.go:52] kubelet running: false
	I1225 19:03:35.624673  297599 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1225 19:03:35.765175  297599 cri.go:61] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1225 19:03:35.765269  297599 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1225 19:03:35.835987  297599 cri.go:96] found id: "0586e432ee43e609607ade028951a4592cec0588e09512fa74f54317148acb65"
	I1225 19:03:35.836011  297599 cri.go:96] found id: "e3f10798d2c5cc7aa34b0f7c0769cc5f3bc2ddad54195a5724aa2248050b4d45"
	I1225 19:03:35.836018  297599 cri.go:96] found id: "ea9cdb66e74e5837779af5d99fbae5b1f3b687573b29124b6deecdc991179c3c"
	I1225 19:03:35.836022  297599 cri.go:96] found id: "6be834e877742b8bfa0bc2d501ed6913a2453ae40c561e27beb542006c7d47e6"
	I1225 19:03:35.836028  297599 cri.go:96] found id: "294fb941f29133cb40754cbd33757b426445328bda2c2356fe6d08b22884da2b"
	I1225 19:03:35.836033  297599 cri.go:96] found id: "8d7e8dc3eb792d198de0248572b5e18d4499c1684bda9bf5f17def41a2fab818"
	I1225 19:03:35.836038  297599 cri.go:96] found id: "8d2b7baedf500ee7f1bfe8f8dd198f5e17d7d4765eb8784fa1263ff20a37911d"
	I1225 19:03:35.836043  297599 cri.go:96] found id: "f163abb6ccc23812b01aab1787a1e9cb17c7aa29ac0031c5d3d528bd0d223238"
	I1225 19:03:35.836047  297599 cri.go:96] found id: "96d9542c197212f0c05bc896dbb04b02a41cb77ea63e21dd98bd9fec4091843d"
	I1225 19:03:35.836060  297599 cri.go:96] found id: "8b85a58f6727b85925d66ae7c892925d7f0d6ad84cf0a49ac39c7dac9256cb8d"
	I1225 19:03:35.836065  297599 cri.go:96] found id: "cd0104e7b2433665e7a7678289b4f5de2377208d5e5b7d7a93d384d481448c5f"
	I1225 19:03:35.836068  297599 cri.go:96] found id: ""
	I1225 19:03:35.836105  297599 ssh_runner.go:195] Run: sudo runc list -f json
	I1225 19:03:36.361110  297599 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1225 19:03:36.373875  297599 pause.go:52] kubelet running: false
	I1225 19:03:36.373951  297599 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1225 19:03:36.516484  297599 cri.go:61] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1225 19:03:36.516563  297599 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1225 19:03:36.583075  297599 cri.go:96] found id: "0586e432ee43e609607ade028951a4592cec0588e09512fa74f54317148acb65"
	I1225 19:03:36.583097  297599 cri.go:96] found id: "e3f10798d2c5cc7aa34b0f7c0769cc5f3bc2ddad54195a5724aa2248050b4d45"
	I1225 19:03:36.583101  297599 cri.go:96] found id: "ea9cdb66e74e5837779af5d99fbae5b1f3b687573b29124b6deecdc991179c3c"
	I1225 19:03:36.583105  297599 cri.go:96] found id: "6be834e877742b8bfa0bc2d501ed6913a2453ae40c561e27beb542006c7d47e6"
	I1225 19:03:36.583108  297599 cri.go:96] found id: "294fb941f29133cb40754cbd33757b426445328bda2c2356fe6d08b22884da2b"
	I1225 19:03:36.583111  297599 cri.go:96] found id: "8d7e8dc3eb792d198de0248572b5e18d4499c1684bda9bf5f17def41a2fab818"
	I1225 19:03:36.583114  297599 cri.go:96] found id: "8d2b7baedf500ee7f1bfe8f8dd198f5e17d7d4765eb8784fa1263ff20a37911d"
	I1225 19:03:36.583117  297599 cri.go:96] found id: "f163abb6ccc23812b01aab1787a1e9cb17c7aa29ac0031c5d3d528bd0d223238"
	I1225 19:03:36.583120  297599 cri.go:96] found id: "96d9542c197212f0c05bc896dbb04b02a41cb77ea63e21dd98bd9fec4091843d"
	I1225 19:03:36.583126  297599 cri.go:96] found id: "8b85a58f6727b85925d66ae7c892925d7f0d6ad84cf0a49ac39c7dac9256cb8d"
	I1225 19:03:36.583129  297599 cri.go:96] found id: "cd0104e7b2433665e7a7678289b4f5de2377208d5e5b7d7a93d384d481448c5f"
	I1225 19:03:36.583132  297599 cri.go:96] found id: ""
	I1225 19:03:36.583168  297599 ssh_runner.go:195] Run: sudo runc list -f json
	I1225 19:03:36.673166  297599 out.go:203] 
	W1225 19:03:36.768651  297599 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-25T19:03:36Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-25T19:03:36Z" level=error msg="open /run/runc: no such file or directory"
	
	W1225 19:03:36.768674  297599 out.go:285] * 
	* 
	W1225 19:03:36.770689  297599 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1225 19:03:36.896497  297599 out.go:203] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:309: out/minikube-linux-amd64 pause -p embed-certs-684693 --alsologtostderr -v=1 failed: exit status 80
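The stderr above shows where the pause actually fails: after stopping the kubelet, the pause path enumerates running containers with `sudo runc list -f json`, and that command exits 1 with "open /run/runc: no such file or directory", which minikube surfaces as GUEST_PAUSE. The directory is runc's default state root, and on this crio node it is absent (the CRI runtime may keep its runc state under a different root). A minimal sketch, not minikube's code, of a listing helper that tolerates the missing directory; the path check and the empty-list fallback are assumptions for illustration.

package main

import (
	"fmt"
	"os"
	"os/exec"
)

// listRuncContainers shells out to `runc list -f json`, as the pause path in
// the log above does, but treats a missing /run/runc state directory as
// "no containers" instead of a hard failure (assumed behaviour).
func listRuncContainers() (string, error) {
	if _, err := os.Stat("/run/runc"); os.IsNotExist(err) {
		return "[]", nil
	}
	out, err := exec.Command("sudo", "runc", "list", "-f", "json").CombinedOutput()
	if err != nil {
		return "", fmt.Errorf("runc list: %v: %s", err, out)
	}
	return string(out), nil
}

func main() {
	out, err := listRuncContainers()
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println(out)
}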
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect embed-certs-684693
helpers_test.go:244: (dbg) docker inspect embed-certs-684693:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "6098c312c5a2ed6ee82f457e7f448de16796cbbcc23aaa5c659a80de165095ca",
	        "Created": "2025-12-25T19:01:30.292736794Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 283925,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-25T19:02:34.567939086Z",
	            "FinishedAt": "2025-12-25T19:02:33.689971042Z"
	        },
	        "Image": "sha256:c444b5e11df9d6bc496256b0e11f4e11deb33a885211ded2c3a4667df55bcb6b",
	        "ResolvConfPath": "/var/lib/docker/containers/6098c312c5a2ed6ee82f457e7f448de16796cbbcc23aaa5c659a80de165095ca/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/6098c312c5a2ed6ee82f457e7f448de16796cbbcc23aaa5c659a80de165095ca/hostname",
	        "HostsPath": "/var/lib/docker/containers/6098c312c5a2ed6ee82f457e7f448de16796cbbcc23aaa5c659a80de165095ca/hosts",
	        "LogPath": "/var/lib/docker/containers/6098c312c5a2ed6ee82f457e7f448de16796cbbcc23aaa5c659a80de165095ca/6098c312c5a2ed6ee82f457e7f448de16796cbbcc23aaa5c659a80de165095ca-json.log",
	        "Name": "/embed-certs-684693",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "embed-certs-684693:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "embed-certs-684693",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "6098c312c5a2ed6ee82f457e7f448de16796cbbcc23aaa5c659a80de165095ca",
	                "LowerDir": "/var/lib/docker/overlay2/33e9c790cbddae9e88f8f10faf1c8c8e9f7c8f596b2ebc8b3c765318689791e6-init/diff:/var/lib/docker/overlay2/8152586e7e91edad0090b5c322534edd1346ae6dc28cbca1827aa4c23f366758/diff",
	                "MergedDir": "/var/lib/docker/overlay2/33e9c790cbddae9e88f8f10faf1c8c8e9f7c8f596b2ebc8b3c765318689791e6/merged",
	                "UpperDir": "/var/lib/docker/overlay2/33e9c790cbddae9e88f8f10faf1c8c8e9f7c8f596b2ebc8b3c765318689791e6/diff",
	                "WorkDir": "/var/lib/docker/overlay2/33e9c790cbddae9e88f8f10faf1c8c8e9f7c8f596b2ebc8b3c765318689791e6/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "embed-certs-684693",
	                "Source": "/var/lib/docker/volumes/embed-certs-684693/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "embed-certs-684693",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "embed-certs-684693",
	                "name.minikube.sigs.k8s.io": "embed-certs-684693",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "22085ad7f6f83565912e09ed6d6b92de79d0f7b2fa701f5349b992d0e304b171",
	            "SandboxKey": "/var/run/docker/netns/22085ad7f6f8",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33083"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33084"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33087"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33085"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33086"
	                    }
	                ]
	            },
	            "Networks": {
	                "embed-certs-684693": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "b5ae0820826f166ee69d26403125a109290c4a58c28c34d1ba9a229995b23eef",
	                    "EndpointID": "0a1e038411df8b78a809dbf9eb228768b59a74cc197d076f9809d3c3b7d76276",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "MacAddress": "ee:41:a6:b1:a1:b7",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "embed-certs-684693",
	                        "6098c312c5a2"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
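The test harness reads the SSH host port out of this inspect output with a Go template, the `docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'"` call visible in the stderr log earlier. A small self-contained sketch of how that template expression evaluates against data shaped like the Ports section above; the struct types here are trimmed-down stand-ins, not Docker's actual API types.

package main

import (
	"os"
	"text/template"
)

// Illustrative shape of the fields the inspect template reads.
type portBinding struct {
	HostIp   string
	HostPort string
}

type inspectData struct {
	NetworkSettings struct {
		Ports map[string][]portBinding
	}
}

func main() {
	var c inspectData
	c.NetworkSettings.Ports = map[string][]portBinding{
		"22/tcp": {{HostIp: "127.0.0.1", HostPort: "33083"}},
	}

	// Same template expression the harness passes to `docker container inspect -f`.
	tmpl := template.Must(template.New("sshPort").Parse(
		`{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`))
	if err := tmpl.Execute(os.Stdout, c); err != nil { // prints: 33083
		panic(err)
	}
}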
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-684693 -n embed-certs-684693
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-684693 -n embed-certs-684693: exit status 2 (331.81228ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
helpers_test.go:253: <<< TestStartStop/group/embed-certs/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-684693 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-amd64 -p embed-certs-684693 logs -n 25: (1.391554875s)
helpers_test.go:261: TestStartStop/group/embed-certs/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────
────────────┐
	│ COMMAND │                                                                                                                        ARGS                                                                                                                        │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────
────────────┤
	│ addons  │ enable metrics-server -p old-k8s-version-163446 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ old-k8s-version-163446       │ jenkins │ v1.37.0 │ 25 Dec 25 19:01 UTC │                     │
	│ stop    │ -p old-k8s-version-163446 --alsologtostderr -v=3                                                                                                                                                                                                   │ old-k8s-version-163446       │ jenkins │ v1.37.0 │ 25 Dec 25 19:01 UTC │ 25 Dec 25 19:01 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-163446 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ old-k8s-version-163446       │ jenkins │ v1.37.0 │ 25 Dec 25 19:01 UTC │ 25 Dec 25 19:01 UTC │
	│ start   │ -p old-k8s-version-163446 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0      │ old-k8s-version-163446       │ jenkins │ v1.37.0 │ 25 Dec 25 19:01 UTC │ 25 Dec 25 19:02 UTC │
	│ addons  │ enable metrics-server -p no-preload-148352 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                            │ no-preload-148352            │ jenkins │ v1.37.0 │ 25 Dec 25 19:02 UTC │                     │
	│ stop    │ -p no-preload-148352 --alsologtostderr -v=3                                                                                                                                                                                                        │ no-preload-148352            │ jenkins │ v1.37.0 │ 25 Dec 25 19:02 UTC │ 25 Dec 25 19:02 UTC │
	│ addons  │ enable metrics-server -p embed-certs-684693 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                           │ embed-certs-684693           │ jenkins │ v1.37.0 │ 25 Dec 25 19:02 UTC │                     │
	│ stop    │ -p embed-certs-684693 --alsologtostderr -v=3                                                                                                                                                                                                       │ embed-certs-684693           │ jenkins │ v1.37.0 │ 25 Dec 25 19:02 UTC │ 25 Dec 25 19:02 UTC │
	│ addons  │ enable dashboard -p no-preload-148352 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                       │ no-preload-148352            │ jenkins │ v1.37.0 │ 25 Dec 25 19:02 UTC │ 25 Dec 25 19:02 UTC │
	│ start   │ -p no-preload-148352 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-rc.1                                                                                       │ no-preload-148352            │ jenkins │ v1.37.0 │ 25 Dec 25 19:02 UTC │ 25 Dec 25 19:03 UTC │
	│ addons  │ enable dashboard -p embed-certs-684693 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                      │ embed-certs-684693           │ jenkins │ v1.37.0 │ 25 Dec 25 19:02 UTC │ 25 Dec 25 19:02 UTC │
	│ start   │ -p embed-certs-684693 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.3                                                                                             │ embed-certs-684693           │ jenkins │ v1.37.0 │ 25 Dec 25 19:02 UTC │ 25 Dec 25 19:03 UTC │
	│ image   │ old-k8s-version-163446 image list --format=json                                                                                                                                                                                                    │ old-k8s-version-163446       │ jenkins │ v1.37.0 │ 25 Dec 25 19:02 UTC │ 25 Dec 25 19:02 UTC │
	│ pause   │ -p old-k8s-version-163446 --alsologtostderr -v=1                                                                                                                                                                                                   │ old-k8s-version-163446       │ jenkins │ v1.37.0 │ 25 Dec 25 19:02 UTC │                     │
	│ delete  │ -p old-k8s-version-163446                                                                                                                                                                                                                          │ old-k8s-version-163446       │ jenkins │ v1.37.0 │ 25 Dec 25 19:02 UTC │ 25 Dec 25 19:03 UTC │
	│ delete  │ -p old-k8s-version-163446                                                                                                                                                                                                                          │ old-k8s-version-163446       │ jenkins │ v1.37.0 │ 25 Dec 25 19:03 UTC │ 25 Dec 25 19:03 UTC │
	│ delete  │ -p disable-driver-mounts-102827                                                                                                                                                                                                                    │ disable-driver-mounts-102827 │ jenkins │ v1.37.0 │ 25 Dec 25 19:03 UTC │ 25 Dec 25 19:03 UTC │
	│ start   │ -p default-k8s-diff-port-960022 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.3                                                                           │ default-k8s-diff-port-960022 │ jenkins │ v1.37.0 │ 25 Dec 25 19:03 UTC │                     │
	│ image   │ no-preload-148352 image list --format=json                                                                                                                                                                                                         │ no-preload-148352            │ jenkins │ v1.37.0 │ 25 Dec 25 19:03 UTC │ 25 Dec 25 19:03 UTC │
	│ pause   │ -p no-preload-148352 --alsologtostderr -v=1                                                                                                                                                                                                        │ no-preload-148352            │ jenkins │ v1.37.0 │ 25 Dec 25 19:03 UTC │                     │
	│ delete  │ -p no-preload-148352                                                                                                                                                                                                                               │ no-preload-148352            │ jenkins │ v1.37.0 │ 25 Dec 25 19:03 UTC │ 25 Dec 25 19:03 UTC │
	│ delete  │ -p no-preload-148352                                                                                                                                                                                                                               │ no-preload-148352            │ jenkins │ v1.37.0 │ 25 Dec 25 19:03 UTC │ 25 Dec 25 19:03 UTC │
	│ start   │ -p newest-cni-731832 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-rc.1 │ newest-cni-731832            │ jenkins │ v1.37.0 │ 25 Dec 25 19:03 UTC │                     │
	│ image   │ embed-certs-684693 image list --format=json                                                                                                                                                                                                        │ embed-certs-684693           │ jenkins │ v1.37.0 │ 25 Dec 25 19:03 UTC │ 25 Dec 25 19:03 UTC │
	│ pause   │ -p embed-certs-684693 --alsologtostderr -v=1                                                                                                                                                                                                       │ embed-certs-684693           │ jenkins │ v1.37.0 │ 25 Dec 25 19:03 UTC │                     │
	└─────────┴────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/25 19:03:32
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1225 19:03:32.784386  296906 out.go:360] Setting OutFile to fd 1 ...
	I1225 19:03:32.784681  296906 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1225 19:03:32.784691  296906 out.go:374] Setting ErrFile to fd 2...
	I1225 19:03:32.784696  296906 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1225 19:03:32.785006  296906 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22301-5579/.minikube/bin
	I1225 19:03:32.785559  296906 out.go:368] Setting JSON to false
	I1225 19:03:32.786959  296906 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":2761,"bootTime":1766686652,"procs":317,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1225 19:03:32.787014  296906 start.go:143] virtualization: kvm guest
	I1225 19:03:32.789781  296906 out.go:179] * [newest-cni-731832] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1225 19:03:32.791138  296906 out.go:179]   - MINIKUBE_LOCATION=22301
	I1225 19:03:32.791160  296906 notify.go:221] Checking for updates...
	I1225 19:03:32.793576  296906 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1225 19:03:32.794841  296906 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22301-5579/kubeconfig
	I1225 19:03:32.795989  296906 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22301-5579/.minikube
	I1225 19:03:32.797198  296906 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1225 19:03:32.798242  296906 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1225 19:03:32.799756  296906 config.go:182] Loaded profile config "default-k8s-diff-port-960022": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1225 19:03:32.799863  296906 config.go:182] Loaded profile config "embed-certs-684693": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1225 19:03:32.799971  296906 config.go:182] Loaded profile config "kubernetes-upgrade-498224": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-rc.1
	I1225 19:03:32.800092  296906 driver.go:422] Setting default libvirt URI to qemu:///system
	I1225 19:03:32.825754  296906 docker.go:124] docker version: linux-29.1.3:Docker Engine - Community
	I1225 19:03:32.825853  296906 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1225 19:03:32.881074  296906 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:64 OomKillDisable:false NGoroutines:75 SystemTime:2025-12-25 19:03:32.871109577 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:dea7da592f5d1d2b7755e3a161be07f43fad8f75 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1225 19:03:32.881221  296906 docker.go:319] overlay module found
	I1225 19:03:32.883082  296906 out.go:179] * Using the docker driver based on user configuration
	I1225 19:03:32.884143  296906 start.go:309] selected driver: docker
	I1225 19:03:32.884159  296906 start.go:928] validating driver "docker" against <nil>
	I1225 19:03:32.884170  296906 start.go:939] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1225 19:03:32.884742  296906 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1225 19:03:32.942734  296906 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:64 OomKillDisable:false NGoroutines:75 SystemTime:2025-12-25 19:03:32.933161539 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:dea7da592f5d1d2b7755e3a161be07f43fad8f75 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1225 19:03:32.942943  296906 start_flags.go:333] no existing cluster config was found, will generate one from the flags 
	W1225 19:03:32.942972  296906 out.go:285] ! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	I1225 19:03:32.943270  296906 start_flags.go:1038] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1225 19:03:32.945600  296906 out.go:179] * Using Docker driver with root privileges
	I1225 19:03:32.946793  296906 cni.go:84] Creating CNI manager for ""
	I1225 19:03:32.946852  296906 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1225 19:03:32.946873  296906 start_flags.go:342] Found "CNI" CNI - setting NetworkPlugin=cni
	I1225 19:03:32.946987  296906 start.go:353] cluster config:
	{Name:newest-cni-731832 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:newest-cni-731832 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1225 19:03:32.948324  296906 out.go:179] * Starting "newest-cni-731832" primary control-plane node in "newest-cni-731832" cluster
	I1225 19:03:32.949391  296906 cache.go:134] Beginning downloading kic base image for docker with crio
	I1225 19:03:32.950620  296906 out.go:179] * Pulling base image v0.0.48-1766570851-22316 ...
	I1225 19:03:32.951663  296906 preload.go:188] Checking if preload exists for k8s version v1.35.0-rc.1 and runtime crio
	I1225 19:03:32.951693  296906 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22301-5579/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-rc.1-cri-o-overlay-amd64.tar.lz4
	I1225 19:03:32.951710  296906 cache.go:65] Caching tarball of preloaded images
	I1225 19:03:32.951761  296906 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a in local docker daemon
	I1225 19:03:32.951787  296906 preload.go:251] Found /home/jenkins/minikube-integration/22301-5579/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-rc.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1225 19:03:32.951795  296906 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-rc.1 on crio
	I1225 19:03:32.951882  296906 profile.go:143] Saving config to /home/jenkins/minikube-integration/22301-5579/.minikube/profiles/newest-cni-731832/config.json ...
	I1225 19:03:32.951937  296906 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22301-5579/.minikube/profiles/newest-cni-731832/config.json: {Name:mkb10c92f3552c610a0c52b2c7838fb72bd11174 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1225 19:03:32.972522  296906 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a in local docker daemon, skipping pull
	I1225 19:03:32.972540  296906 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a exists in daemon, skipping load
	I1225 19:03:32.972555  296906 cache.go:243] Successfully downloaded all kic artifacts
	I1225 19:03:32.972580  296906 start.go:360] acquireMachinesLock for newest-cni-731832: {Name:mk069bfbc24c2c34510fc7ad141c2d655d217990 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1225 19:03:32.972669  296906 start.go:364] duration metric: took 74.146µs to acquireMachinesLock for "newest-cni-731832"
	I1225 19:03:32.972691  296906 start.go:93] Provisioning new machine with config: &{Name:newest-cni-731832 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:newest-cni-731832 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false} &{Name: IP: Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1225 19:03:32.972753  296906 start.go:125] createHost starting for "" (driver="docker")
	W1225 19:03:30.037343  290541 node_ready.go:57] node "default-k8s-diff-port-960022" has "Ready":"False" status (will retry)
	W1225 19:03:32.535887  290541 node_ready.go:57] node "default-k8s-diff-port-960022" has "Ready":"False" status (will retry)
	
	
	==> CRI-O <==
	Dec 25 19:02:55 embed-certs-684693 crio[568]: time="2025-12-25T19:02:55.907693483Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Dec 25 19:02:55 embed-certs-684693 crio[568]: time="2025-12-25T19:02:55.911221785Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Dec 25 19:02:55 embed-certs-684693 crio[568]: time="2025-12-25T19:02:55.911242838Z" level=info msg="Updated default CNI network name to kindnet"
	Dec 25 19:03:11 embed-certs-684693 crio[568]: time="2025-12-25T19:03:11.059018013Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=728586e9-7e58-4074-ba03-1acb1d53b845 name=/runtime.v1.ImageService/ImageStatus
	Dec 25 19:03:11 embed-certs-684693 crio[568]: time="2025-12-25T19:03:11.062182232Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=3392ac9e-a64d-416d-b4e5-cd1b705fd9ca name=/runtime.v1.ImageService/ImageStatus
	Dec 25 19:03:11 embed-certs-684693 crio[568]: time="2025-12-25T19:03:11.065518604Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-bcvcs/dashboard-metrics-scraper" id=3c610850-b020-4e98-8c93-b1ae250a2c73 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 25 19:03:11 embed-certs-684693 crio[568]: time="2025-12-25T19:03:11.065758853Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 25 19:03:11 embed-certs-684693 crio[568]: time="2025-12-25T19:03:11.07351927Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 25 19:03:11 embed-certs-684693 crio[568]: time="2025-12-25T19:03:11.074179636Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 25 19:03:11 embed-certs-684693 crio[568]: time="2025-12-25T19:03:11.10056414Z" level=info msg="Created container 8b85a58f6727b85925d66ae7c892925d7f0d6ad84cf0a49ac39c7dac9256cb8d: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-bcvcs/dashboard-metrics-scraper" id=3c610850-b020-4e98-8c93-b1ae250a2c73 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 25 19:03:11 embed-certs-684693 crio[568]: time="2025-12-25T19:03:11.101254103Z" level=info msg="Starting container: 8b85a58f6727b85925d66ae7c892925d7f0d6ad84cf0a49ac39c7dac9256cb8d" id=cf2fa182-db39-4058-9774-5b773cccf18d name=/runtime.v1.RuntimeService/StartContainer
	Dec 25 19:03:11 embed-certs-684693 crio[568]: time="2025-12-25T19:03:11.103252187Z" level=info msg="Started container" PID=1770 containerID=8b85a58f6727b85925d66ae7c892925d7f0d6ad84cf0a49ac39c7dac9256cb8d description=kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-bcvcs/dashboard-metrics-scraper id=cf2fa182-db39-4058-9774-5b773cccf18d name=/runtime.v1.RuntimeService/StartContainer sandboxID=c9e44378d708b0e0e5b8f655b83a9d2c22b17e97fa7e74f87b1e603d45235902
	Dec 25 19:03:11 embed-certs-684693 crio[568]: time="2025-12-25T19:03:11.163545483Z" level=info msg="Removing container: c3217d9c195c881876d38490fc5fa9a60e72aacb9861ebb97fb47adc20058b6c" id=7d6892e5-0c8b-41fe-b723-176101d972f4 name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 25 19:03:11 embed-certs-684693 crio[568]: time="2025-12-25T19:03:11.173721428Z" level=info msg="Removed container c3217d9c195c881876d38490fc5fa9a60e72aacb9861ebb97fb47adc20058b6c: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-bcvcs/dashboard-metrics-scraper" id=7d6892e5-0c8b-41fe-b723-176101d972f4 name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 25 19:03:16 embed-certs-684693 crio[568]: time="2025-12-25T19:03:16.176778412Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=5e1fe889-60b1-4396-b6d4-727f2997652e name=/runtime.v1.ImageService/ImageStatus
	Dec 25 19:03:16 embed-certs-684693 crio[568]: time="2025-12-25T19:03:16.177664641Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=37395421-7a12-40ee-bb53-be6934812f65 name=/runtime.v1.ImageService/ImageStatus
	Dec 25 19:03:16 embed-certs-684693 crio[568]: time="2025-12-25T19:03:16.178711044Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=1e32921a-7fdf-4607-924c-fe67cc0c493a name=/runtime.v1.RuntimeService/CreateContainer
	Dec 25 19:03:16 embed-certs-684693 crio[568]: time="2025-12-25T19:03:16.178831282Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 25 19:03:16 embed-certs-684693 crio[568]: time="2025-12-25T19:03:16.184774518Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 25 19:03:16 embed-certs-684693 crio[568]: time="2025-12-25T19:03:16.184993713Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/a8ec94d4392e5fe8de386a813f926ddc2ede3fcae163248965b3132fd374e9d0/merged/etc/passwd: no such file or directory"
	Dec 25 19:03:16 embed-certs-684693 crio[568]: time="2025-12-25T19:03:16.185027084Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/a8ec94d4392e5fe8de386a813f926ddc2ede3fcae163248965b3132fd374e9d0/merged/etc/group: no such file or directory"
	Dec 25 19:03:16 embed-certs-684693 crio[568]: time="2025-12-25T19:03:16.185882775Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 25 19:03:16 embed-certs-684693 crio[568]: time="2025-12-25T19:03:16.213272978Z" level=info msg="Created container 0586e432ee43e609607ade028951a4592cec0588e09512fa74f54317148acb65: kube-system/storage-provisioner/storage-provisioner" id=1e32921a-7fdf-4607-924c-fe67cc0c493a name=/runtime.v1.RuntimeService/CreateContainer
	Dec 25 19:03:16 embed-certs-684693 crio[568]: time="2025-12-25T19:03:16.213855928Z" level=info msg="Starting container: 0586e432ee43e609607ade028951a4592cec0588e09512fa74f54317148acb65" id=f585a1a0-d7b9-47bd-bf1d-014b38205e5b name=/runtime.v1.RuntimeService/StartContainer
	Dec 25 19:03:16 embed-certs-684693 crio[568]: time="2025-12-25T19:03:16.215881859Z" level=info msg="Started container" PID=1784 containerID=0586e432ee43e609607ade028951a4592cec0588e09512fa74f54317148acb65 description=kube-system/storage-provisioner/storage-provisioner id=f585a1a0-d7b9-47bd-bf1d-014b38205e5b name=/runtime.v1.RuntimeService/StartContainer sandboxID=450220d4e7fda9b3ac53de69fb4b0deca3b1bbe43eb7421c74695abfbbe8b257
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED             STATE               NAME                        ATTEMPT             POD ID              POD                                          NAMESPACE
	0586e432ee43e       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           21 seconds ago      Running             storage-provisioner         1                   450220d4e7fda       storage-provisioner                          kube-system
	8b85a58f6727b       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           26 seconds ago      Exited              dashboard-metrics-scraper   2                   c9e44378d708b       dashboard-metrics-scraper-6ffb444bf9-bcvcs   kubernetes-dashboard
	cd0104e7b2433       docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029   44 seconds ago      Running             kubernetes-dashboard        0                   a583c9ce3f3d0       kubernetes-dashboard-855c9754f9-xv29k        kubernetes-dashboard
	fb0da24909dab       56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c                                           52 seconds ago      Running             busybox                     1                   318e2c32cecba       busybox                                      default
	e3f10798d2c5c       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                           52 seconds ago      Running             coredns                     0                   0e70e4d94ebc4       coredns-66bc5c9577-n4nqj                     kube-system
	ea9cdb66e74e5       4921d7a6dffa922dd679732ba4797085c4f39e9a53bee8b6fdb1d463e8571251                                           52 seconds ago      Running             kindnet-cni                 0                   0177ad6d25279       kindnet-gqdkf                                kube-system
	6be834e877742       36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691                                           52 seconds ago      Running             kube-proxy                  0                   99f788367208b       kube-proxy-wzb26                             kube-system
	294fb941f2913       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           52 seconds ago      Exited              storage-provisioner         0                   450220d4e7fda       storage-provisioner                          kube-system
	8d7e8dc3eb792       aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78                                           55 seconds ago      Running             kube-scheduler              0                   c2b95216334e5       kube-scheduler-embed-certs-684693            kube-system
	8d2b7baedf500       a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1                                           55 seconds ago      Running             etcd                        0                   77f839976da23       etcd-embed-certs-684693                      kube-system
	f163abb6ccc23       5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942                                           55 seconds ago      Running             kube-controller-manager     0                   dd75a8448dc07       kube-controller-manager-embed-certs-684693   kube-system
	96d9542c19721       aa27095f5619377172f3d59289ccb2ba567ebea93a736d1705be068b2c030b0c                                           55 seconds ago      Running             kube-apiserver              0                   a9186852dd71e       kube-apiserver-embed-certs-684693            kube-system
	
	
	==> coredns [e3f10798d2c5cc7aa34b0f7c0769cc5f3bc2ddad54195a5724aa2248050b4d45] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 3e2243e8b9e7116f563b83b1933f477a68ba9ad4a829ed5d7e54629fb2ce53528b9bc6023030be20be434ad805fd246296dd428c64e9bbef3a70f22b8621f560
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:39266 - 49002 "HINFO IN 5977676950089410459.8639051078114193271. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.020000086s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	
	
	==> describe nodes <==
	Name:               embed-certs-684693
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=embed-certs-684693
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=65b0339f3ab6fa9cf527eb915d9288ef7a9c7fef
	                    minikube.k8s.io/name=embed-certs-684693
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_25T19_01_46_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 25 Dec 2025 19:01:42 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-684693
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 25 Dec 2025 19:03:25 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 25 Dec 2025 19:03:15 +0000   Thu, 25 Dec 2025 19:01:40 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 25 Dec 2025 19:03:15 +0000   Thu, 25 Dec 2025 19:01:40 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 25 Dec 2025 19:03:15 +0000   Thu, 25 Dec 2025 19:01:40 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 25 Dec 2025 19:03:15 +0000   Thu, 25 Dec 2025 19:02:03 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    embed-certs-684693
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863360Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863360Ki
	  pods:               110
	System Info:
	  Machine ID:                 6a05ebbd9dbde47388fc365d694bbbfd
	  System UUID:                23021cb7-5678-4260-b426-ee2032296d45
	  Boot ID:                    665c5054-bd76-444c-ba4d-23c4edde1464
	  Kernel Version:             6.8.0-1045-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.3
	  Kubelet Version:            v1.34.3
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         92s
	  kube-system                 coredns-66bc5c9577-n4nqj                      100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     108s
	  kube-system                 etcd-embed-certs-684693                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         113s
	  kube-system                 kindnet-gqdkf                                 100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      108s
	  kube-system                 kube-apiserver-embed-certs-684693             250m (3%)     0 (0%)      0 (0%)           0 (0%)         113s
	  kube-system                 kube-controller-manager-embed-certs-684693    200m (2%)     0 (0%)      0 (0%)           0 (0%)         113s
	  kube-system                 kube-proxy-wzb26                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         108s
	  kube-system                 kube-scheduler-embed-certs-684693             100m (1%)     0 (0%)      0 (0%)           0 (0%)         113s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         107s
	  kubernetes-dashboard        dashboard-metrics-scraper-6ffb444bf9-bcvcs    0 (0%)        0 (0%)      0 (0%)           0 (0%)         50s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-xv29k         0 (0%)        0 (0%)      0 (0%)           0 (0%)         50s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 106s               kube-proxy       
	  Normal  Starting                 52s                kube-proxy       
	  Normal  NodeHasSufficientMemory  113s               kubelet          Node embed-certs-684693 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    113s               kubelet          Node embed-certs-684693 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     113s               kubelet          Node embed-certs-684693 status is now: NodeHasSufficientPID
	  Normal  Starting                 113s               kubelet          Starting kubelet.
	  Normal  RegisteredNode           109s               node-controller  Node embed-certs-684693 event: Registered Node embed-certs-684693 in Controller
	  Normal  NodeReady                95s                kubelet          Node embed-certs-684693 status is now: NodeReady
	  Normal  Starting                 56s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  56s (x8 over 56s)  kubelet          Node embed-certs-684693 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    56s (x8 over 56s)  kubelet          Node embed-certs-684693 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     56s (x8 over 56s)  kubelet          Node embed-certs-684693 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           51s                node-controller  Node embed-certs-684693 event: Registered Node embed-certs-684693 in Controller
	
	
	==> dmesg <==
	[Dec25 18:17] MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
	[  +0.001703] TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details.
	[  +0.001000] MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
	[  +0.084012] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
	[  +0.391152] i8042: Warning: Keylock active
	[  +0.010665] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.485479] block sda: the capability attribute has been deprecated.
	[  +0.079658] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.024208] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +4.790329] kauditd_printk_skb: 47 callbacks suppressed
	
	
	==> etcd [8d2b7baedf500ee7f1bfe8f8dd198f5e17d7d4765eb8784fa1263ff20a37911d] <==
	{"level":"warn","ts":"2025-12-25T19:02:43.955658Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39628","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-25T19:02:43.962705Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39648","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-25T19:02:43.969299Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39654","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-25T19:02:43.975646Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39674","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-25T19:02:43.983439Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39690","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-25T19:02:43.990303Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39698","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-25T19:02:43.996873Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39714","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-25T19:02:44.005989Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39734","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-25T19:02:44.012460Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39756","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-25T19:02:44.019037Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39788","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-25T19:02:44.025497Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39810","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-25T19:02:44.032138Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39826","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-25T19:02:44.038673Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39838","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-25T19:02:44.045490Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39862","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-25T19:02:44.052876Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39878","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-25T19:02:44.059773Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39906","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-25T19:02:44.067544Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39930","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-25T19:02:44.074877Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39946","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-25T19:02:44.082173Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39974","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-25T19:02:44.096782Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40000","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-25T19:02:44.103485Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40018","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-25T19:02:44.109978Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40034","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-25T19:02:44.163001Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40044","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-25T19:03:36.060982Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"128.401139ms","expected-duration":"100ms","prefix":"","request":"header:<ID:15638357553590744098 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/masterleases/192.168.76.2\" mod_revision:631 > success:<request_put:<key:\"/registry/masterleases/192.168.76.2\" value_size:65 lease:6414985516735968288 >> failure:<request_range:<key:\"/registry/masterleases/192.168.76.2\" > >>","response":"size:16"}
	{"level":"info","ts":"2025-12-25T19:03:36.061095Z","caller":"traceutil/trace.go:172","msg":"trace[532490420] transaction","detail":"{read_only:false; response_revision:638; number_of_response:1; }","duration":"233.128922ms","start":"2025-12-25T19:03:35.827951Z","end":"2025-12-25T19:03:36.061080Z","steps":["trace[532490420] 'process raft request'  (duration: 104.076463ms)","trace[532490420] 'compare'  (duration: 128.294023ms)"],"step_count":2}
	
	
	==> kernel <==
	 19:03:38 up 46 min,  0 user,  load average: 2.91, 2.52, 1.82
	Linux embed-certs-684693 6.8.0-1045-gcp #48~22.04.1-Ubuntu SMP Tue Nov 25 13:07:56 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [ea9cdb66e74e5837779af5d99fbae5b1f3b687573b29124b6deecdc991179c3c] <==
	I1225 19:02:45.680548       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1225 19:02:45.680856       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1225 19:02:45.681047       1 main.go:148] setting mtu 1500 for CNI 
	I1225 19:02:45.681076       1 main.go:178] kindnetd IP family: "ipv4"
	I1225 19:02:45.681121       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-12-25T19:02:45Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1225 19:02:45.891236       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1225 19:02:45.891272       1 controller.go:381] "Waiting for informer caches to sync"
	I1225 19:02:45.891292       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1225 19:02:45.891968       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1225 19:02:46.392340       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1225 19:02:46.392368       1 metrics.go:72] Registering metrics
	I1225 19:02:46.392442       1 controller.go:711] "Syncing nftables rules"
	I1225 19:02:55.892024       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1225 19:02:55.892095       1 main.go:301] handling current node
	I1225 19:03:05.892220       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1225 19:03:05.892265       1 main.go:301] handling current node
	I1225 19:03:15.891196       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1225 19:03:15.891232       1 main.go:301] handling current node
	I1225 19:03:25.896208       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1225 19:03:25.896269       1 main.go:301] handling current node
	I1225 19:03:35.896661       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1225 19:03:35.896695       1 main.go:301] handling current node
	
	
	==> kube-apiserver [96d9542c197212f0c05bc896dbb04b02a41cb77ea63e21dd98bd9fec4091843d] <==
	I1225 19:02:44.645841       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1225 19:02:44.646370       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1225 19:02:44.646584       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1225 19:02:44.646668       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1225 19:02:44.649399       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1225 19:02:44.649462       1 shared_informer.go:356] "Caches are synced" controller="configmaps"
	I1225 19:02:44.653740       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1225 19:02:44.659188       1 shared_informer.go:356] "Caches are synced" controller="crd-autoregister"
	I1225 19:02:44.659230       1 aggregator.go:171] initial CRD sync complete...
	I1225 19:02:44.659239       1 autoregister_controller.go:144] Starting autoregister controller
	I1225 19:02:44.659247       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1225 19:02:44.659252       1 cache.go:39] Caches are synced for autoregister controller
	I1225 19:02:44.660425       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1225 19:02:44.683746       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1225 19:02:44.943126       1 controller.go:667] quota admission added evaluator for: namespaces
	I1225 19:02:44.969869       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1225 19:02:44.985944       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1225 19:02:44.993160       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1225 19:02:44.999310       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1225 19:02:45.036374       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.101.136.43"}
	I1225 19:02:45.046773       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.108.2.177"}
	I1225 19:02:45.548242       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1225 19:02:48.231221       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1225 19:02:48.383082       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1225 19:02:48.430236       1 controller.go:667] quota admission added evaluator for: endpoints
	
	
	==> kube-controller-manager [f163abb6ccc23812b01aab1787a1e9cb17c7aa29ac0031c5d3d528bd0d223238] <==
	I1225 19:02:47.954223       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1225 19:02:47.954233       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1225 19:02:47.955293       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1225 19:02:47.957780       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1225 19:02:47.967964       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1225 19:02:47.968029       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1225 19:02:47.968055       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1225 19:02:47.968060       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1225 19:02:47.968064       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1225 19:02:47.977928       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1225 19:02:47.977946       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1225 19:02:47.977936       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1225 19:02:47.977980       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1225 19:02:47.978024       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1225 19:02:47.978027       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1225 19:02:47.978045       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1225 19:02:47.978400       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1225 19:02:47.978548       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1225 19:02:47.979709       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1225 19:02:47.979816       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1225 19:02:47.979933       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="embed-certs-684693"
	I1225 19:02:47.979992       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1225 19:02:47.983770       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1225 19:02:47.994886       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1225 19:02:48.000042       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	
	
	==> kube-proxy [6be834e877742b8bfa0bc2d501ed6913a2453ae40c561e27beb542006c7d47e6] <==
	I1225 19:02:45.437440       1 server_linux.go:53] "Using iptables proxy"
	I1225 19:02:45.501727       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1225 19:02:45.602222       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1225 19:02:45.602267       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E1225 19:02:45.602335       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1225 19:02:45.623378       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1225 19:02:45.623444       1 server_linux.go:132] "Using iptables Proxier"
	I1225 19:02:45.629435       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1225 19:02:45.629909       1 server.go:527] "Version info" version="v1.34.3"
	I1225 19:02:45.629962       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1225 19:02:45.631630       1 config.go:403] "Starting serviceCIDR config controller"
	I1225 19:02:45.631661       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1225 19:02:45.631704       1 config.go:200] "Starting service config controller"
	I1225 19:02:45.631720       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1225 19:02:45.631728       1 config.go:106] "Starting endpoint slice config controller"
	I1225 19:02:45.631743       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1225 19:02:45.631799       1 config.go:309] "Starting node config controller"
	I1225 19:02:45.631807       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1225 19:02:45.731850       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1225 19:02:45.731874       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1225 19:02:45.731923       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1225 19:02:45.731877       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-scheduler [8d7e8dc3eb792d198de0248572b5e18d4499c1684bda9bf5f17def41a2fab818] <==
	I1225 19:02:43.435841       1 serving.go:386] Generated self-signed cert in-memory
	I1225 19:02:45.295740       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.3"
	I1225 19:02:45.295763       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1225 19:02:45.299686       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I1225 19:02:45.299709       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1225 19:02:45.299716       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1225 19:02:45.299726       1 shared_informer.go:349] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
	I1225 19:02:45.299729       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1225 19:02:45.299736       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1225 19:02:45.300148       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1225 19:02:45.300215       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1225 19:02:45.399877       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1225 19:02:45.399941       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1225 19:02:45.400195       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
	
	
	==> kubelet <==
	Dec 25 19:02:48 embed-certs-684693 kubelet[730]: I1225 19:02:48.597642     730 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/22cdf105-cc29-4664-bb39-988c3cbbed55-tmp-volume\") pod \"kubernetes-dashboard-855c9754f9-xv29k\" (UID: \"22cdf105-cc29-4664-bb39-988c3cbbed55\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-xv29k"
	Dec 25 19:02:48 embed-certs-684693 kubelet[730]: I1225 19:02:48.597655     730 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dbgp9\" (UniqueName: \"kubernetes.io/projected/22cdf105-cc29-4664-bb39-988c3cbbed55-kube-api-access-dbgp9\") pod \"kubernetes-dashboard-855c9754f9-xv29k\" (UID: \"22cdf105-cc29-4664-bb39-988c3cbbed55\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-xv29k"
	Dec 25 19:02:51 embed-certs-684693 kubelet[730]: I1225 19:02:51.103238     730 scope.go:117] "RemoveContainer" containerID="2eab8063837722e0a2e2b694e6c1c8f12f9e668b63dac3bb644127accb3fffce"
	Dec 25 19:02:51 embed-certs-684693 kubelet[730]: I1225 19:02:51.152097     730 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Dec 25 19:02:52 embed-certs-684693 kubelet[730]: I1225 19:02:52.107428     730 scope.go:117] "RemoveContainer" containerID="2eab8063837722e0a2e2b694e6c1c8f12f9e668b63dac3bb644127accb3fffce"
	Dec 25 19:02:52 embed-certs-684693 kubelet[730]: I1225 19:02:52.107603     730 scope.go:117] "RemoveContainer" containerID="c3217d9c195c881876d38490fc5fa9a60e72aacb9861ebb97fb47adc20058b6c"
	Dec 25 19:02:52 embed-certs-684693 kubelet[730]: E1225 19:02:52.107797     730 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-bcvcs_kubernetes-dashboard(3d92e4d7-4fa3-464f-b547-80838b400c09)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-bcvcs" podUID="3d92e4d7-4fa3-464f-b547-80838b400c09"
	Dec 25 19:02:53 embed-certs-684693 kubelet[730]: I1225 19:02:53.113652     730 scope.go:117] "RemoveContainer" containerID="c3217d9c195c881876d38490fc5fa9a60e72aacb9861ebb97fb47adc20058b6c"
	Dec 25 19:02:53 embed-certs-684693 kubelet[730]: E1225 19:02:53.113848     730 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-bcvcs_kubernetes-dashboard(3d92e4d7-4fa3-464f-b547-80838b400c09)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-bcvcs" podUID="3d92e4d7-4fa3-464f-b547-80838b400c09"
	Dec 25 19:02:54 embed-certs-684693 kubelet[730]: I1225 19:02:54.128467     730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-xv29k" podStartSLOduration=1.101966603 podStartE2EDuration="6.128443806s" podCreationTimestamp="2025-12-25 19:02:48 +0000 UTC" firstStartedPulling="2025-12-25 19:02:48.828144052 +0000 UTC m=+6.873089429" lastFinishedPulling="2025-12-25 19:02:53.854621247 +0000 UTC m=+11.899566632" observedRunningTime="2025-12-25 19:02:54.128287055 +0000 UTC m=+12.173232449" watchObservedRunningTime="2025-12-25 19:02:54.128443806 +0000 UTC m=+12.173389200"
	Dec 25 19:02:56 embed-certs-684693 kubelet[730]: I1225 19:02:56.908990     730 scope.go:117] "RemoveContainer" containerID="c3217d9c195c881876d38490fc5fa9a60e72aacb9861ebb97fb47adc20058b6c"
	Dec 25 19:02:56 embed-certs-684693 kubelet[730]: E1225 19:02:56.909167     730 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-bcvcs_kubernetes-dashboard(3d92e4d7-4fa3-464f-b547-80838b400c09)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-bcvcs" podUID="3d92e4d7-4fa3-464f-b547-80838b400c09"
	Dec 25 19:03:11 embed-certs-684693 kubelet[730]: I1225 19:03:11.058458     730 scope.go:117] "RemoveContainer" containerID="c3217d9c195c881876d38490fc5fa9a60e72aacb9861ebb97fb47adc20058b6c"
	Dec 25 19:03:11 embed-certs-684693 kubelet[730]: I1225 19:03:11.162120     730 scope.go:117] "RemoveContainer" containerID="c3217d9c195c881876d38490fc5fa9a60e72aacb9861ebb97fb47adc20058b6c"
	Dec 25 19:03:11 embed-certs-684693 kubelet[730]: I1225 19:03:11.162335     730 scope.go:117] "RemoveContainer" containerID="8b85a58f6727b85925d66ae7c892925d7f0d6ad84cf0a49ac39c7dac9256cb8d"
	Dec 25 19:03:11 embed-certs-684693 kubelet[730]: E1225 19:03:11.162539     730 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-bcvcs_kubernetes-dashboard(3d92e4d7-4fa3-464f-b547-80838b400c09)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-bcvcs" podUID="3d92e4d7-4fa3-464f-b547-80838b400c09"
	Dec 25 19:03:16 embed-certs-684693 kubelet[730]: I1225 19:03:16.176363     730 scope.go:117] "RemoveContainer" containerID="294fb941f29133cb40754cbd33757b426445328bda2c2356fe6d08b22884da2b"
	Dec 25 19:03:16 embed-certs-684693 kubelet[730]: I1225 19:03:16.910040     730 scope.go:117] "RemoveContainer" containerID="8b85a58f6727b85925d66ae7c892925d7f0d6ad84cf0a49ac39c7dac9256cb8d"
	Dec 25 19:03:16 embed-certs-684693 kubelet[730]: E1225 19:03:16.910246     730 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-bcvcs_kubernetes-dashboard(3d92e4d7-4fa3-464f-b547-80838b400c09)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-bcvcs" podUID="3d92e4d7-4fa3-464f-b547-80838b400c09"
	Dec 25 19:03:29 embed-certs-684693 kubelet[730]: I1225 19:03:29.058367     730 scope.go:117] "RemoveContainer" containerID="8b85a58f6727b85925d66ae7c892925d7f0d6ad84cf0a49ac39c7dac9256cb8d"
	Dec 25 19:03:29 embed-certs-684693 kubelet[730]: E1225 19:03:29.058553     730 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-bcvcs_kubernetes-dashboard(3d92e4d7-4fa3-464f-b547-80838b400c09)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-bcvcs" podUID="3d92e4d7-4fa3-464f-b547-80838b400c09"
	Dec 25 19:03:35 embed-certs-684693 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Dec 25 19:03:35 embed-certs-684693 systemd[1]: kubelet.service: Deactivated successfully.
	Dec 25 19:03:35 embed-certs-684693 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 25 19:03:35 embed-certs-684693 systemd[1]: kubelet.service: Consumed 1.672s CPU time.
	
	
	==> kubernetes-dashboard [cd0104e7b2433665e7a7678289b4f5de2377208d5e5b7d7a93d384d481448c5f] <==
	2025/12/25 19:02:53 Using namespace: kubernetes-dashboard
	2025/12/25 19:02:53 Using in-cluster config to connect to apiserver
	2025/12/25 19:02:53 Using secret token for csrf signing
	2025/12/25 19:02:53 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/12/25 19:02:53 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/12/25 19:02:53 Successful initial request to the apiserver, version: v1.34.3
	2025/12/25 19:02:53 Generating JWE encryption key
	2025/12/25 19:02:53 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/12/25 19:02:53 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/12/25 19:02:54 Initializing JWE encryption key from synchronized object
	2025/12/25 19:02:54 Creating in-cluster Sidecar client
	2025/12/25 19:02:54 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/12/25 19:02:54 Serving insecurely on HTTP port: 9090
	2025/12/25 19:03:24 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/12/25 19:02:53 Starting overwatch
	
	
	==> storage-provisioner [0586e432ee43e609607ade028951a4592cec0588e09512fa74f54317148acb65] <==
	I1225 19:03:16.229704       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1225 19:03:16.239123       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1225 19:03:16.239175       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1225 19:03:16.241218       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1225 19:03:19.695852       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1225 19:03:23.956105       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1225 19:03:27.554492       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1225 19:03:30.608466       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1225 19:03:33.630507       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1225 19:03:33.636245       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1225 19:03:33.636430       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1225 19:03:33.636520       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"c9f66672-eb7f-41d5-8fa8-7c79d48325e3", APIVersion:"v1", ResourceVersion:"635", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-684693_e3448911-f325-4f67-a4f0-27f111e2b194 became leader
	I1225 19:03:33.636697       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-684693_e3448911-f325-4f67-a4f0-27f111e2b194!
	W1225 19:03:33.638384       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1225 19:03:33.642404       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1225 19:03:33.737677       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-684693_e3448911-f325-4f67-a4f0-27f111e2b194!
	W1225 19:03:35.645438       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1225 19:03:35.677703       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1225 19:03:37.681622       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1225 19:03:37.688001       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	
	
	==> storage-provisioner [294fb941f29133cb40754cbd33757b426445328bda2c2356fe6d08b22884da2b] <==
	I1225 19:02:45.410582       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1225 19:03:15.414289       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

                                                
                                                
-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-684693 -n embed-certs-684693
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-684693 -n embed-certs-684693: exit status 2 (326.890153ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:270: (dbg) Run:  kubectl --context embed-certs-684693 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:294: <<< TestStartStop/group/embed-certs/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:295: ---------------------/post-mortem---------------------------------
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect embed-certs-684693
helpers_test.go:244: (dbg) docker inspect embed-certs-684693:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "6098c312c5a2ed6ee82f457e7f448de16796cbbcc23aaa5c659a80de165095ca",
	        "Created": "2025-12-25T19:01:30.292736794Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 283925,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-25T19:02:34.567939086Z",
	            "FinishedAt": "2025-12-25T19:02:33.689971042Z"
	        },
	        "Image": "sha256:c444b5e11df9d6bc496256b0e11f4e11deb33a885211ded2c3a4667df55bcb6b",
	        "ResolvConfPath": "/var/lib/docker/containers/6098c312c5a2ed6ee82f457e7f448de16796cbbcc23aaa5c659a80de165095ca/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/6098c312c5a2ed6ee82f457e7f448de16796cbbcc23aaa5c659a80de165095ca/hostname",
	        "HostsPath": "/var/lib/docker/containers/6098c312c5a2ed6ee82f457e7f448de16796cbbcc23aaa5c659a80de165095ca/hosts",
	        "LogPath": "/var/lib/docker/containers/6098c312c5a2ed6ee82f457e7f448de16796cbbcc23aaa5c659a80de165095ca/6098c312c5a2ed6ee82f457e7f448de16796cbbcc23aaa5c659a80de165095ca-json.log",
	        "Name": "/embed-certs-684693",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "embed-certs-684693:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "embed-certs-684693",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "6098c312c5a2ed6ee82f457e7f448de16796cbbcc23aaa5c659a80de165095ca",
	                "LowerDir": "/var/lib/docker/overlay2/33e9c790cbddae9e88f8f10faf1c8c8e9f7c8f596b2ebc8b3c765318689791e6-init/diff:/var/lib/docker/overlay2/8152586e7e91edad0090b5c322534edd1346ae6dc28cbca1827aa4c23f366758/diff",
	                "MergedDir": "/var/lib/docker/overlay2/33e9c790cbddae9e88f8f10faf1c8c8e9f7c8f596b2ebc8b3c765318689791e6/merged",
	                "UpperDir": "/var/lib/docker/overlay2/33e9c790cbddae9e88f8f10faf1c8c8e9f7c8f596b2ebc8b3c765318689791e6/diff",
	                "WorkDir": "/var/lib/docker/overlay2/33e9c790cbddae9e88f8f10faf1c8c8e9f7c8f596b2ebc8b3c765318689791e6/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "embed-certs-684693",
	                "Source": "/var/lib/docker/volumes/embed-certs-684693/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "embed-certs-684693",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "embed-certs-684693",
	                "name.minikube.sigs.k8s.io": "embed-certs-684693",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "22085ad7f6f83565912e09ed6d6b92de79d0f7b2fa701f5349b992d0e304b171",
	            "SandboxKey": "/var/run/docker/netns/22085ad7f6f8",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33083"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33084"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33087"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33085"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33086"
	                    }
	                ]
	            },
	            "Networks": {
	                "embed-certs-684693": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "b5ae0820826f166ee69d26403125a109290c4a58c28c34d1ba9a229995b23eef",
	                    "EndpointID": "0a1e038411df8b78a809dbf9eb228768b59a74cc197d076f9809d3c3b7d76276",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "MacAddress": "ee:41:a6:b1:a1:b7",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "embed-certs-684693",
	                        "6098c312c5a2"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-684693 -n embed-certs-684693
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-684693 -n embed-certs-684693: exit status 2 (327.313335ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
helpers_test.go:253: <<< TestStartStop/group/embed-certs/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-684693 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-amd64 -p embed-certs-684693 logs -n 25: (1.11637459s)
helpers_test.go:261: TestStartStop/group/embed-certs/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────
────────────┐
	│ COMMAND │                                                                                                                        ARGS                                                                                                                        │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────
────────────┤
	│ addons  │ enable metrics-server -p old-k8s-version-163446 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ old-k8s-version-163446       │ jenkins │ v1.37.0 │ 25 Dec 25 19:01 UTC │                     │
	│ stop    │ -p old-k8s-version-163446 --alsologtostderr -v=3                                                                                                                                                                                                   │ old-k8s-version-163446       │ jenkins │ v1.37.0 │ 25 Dec 25 19:01 UTC │ 25 Dec 25 19:01 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-163446 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ old-k8s-version-163446       │ jenkins │ v1.37.0 │ 25 Dec 25 19:01 UTC │ 25 Dec 25 19:01 UTC │
	│ start   │ -p old-k8s-version-163446 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0      │ old-k8s-version-163446       │ jenkins │ v1.37.0 │ 25 Dec 25 19:01 UTC │ 25 Dec 25 19:02 UTC │
	│ addons  │ enable metrics-server -p no-preload-148352 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                            │ no-preload-148352            │ jenkins │ v1.37.0 │ 25 Dec 25 19:02 UTC │                     │
	│ stop    │ -p no-preload-148352 --alsologtostderr -v=3                                                                                                                                                                                                        │ no-preload-148352            │ jenkins │ v1.37.0 │ 25 Dec 25 19:02 UTC │ 25 Dec 25 19:02 UTC │
	│ addons  │ enable metrics-server -p embed-certs-684693 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                           │ embed-certs-684693           │ jenkins │ v1.37.0 │ 25 Dec 25 19:02 UTC │                     │
	│ stop    │ -p embed-certs-684693 --alsologtostderr -v=3                                                                                                                                                                                                       │ embed-certs-684693           │ jenkins │ v1.37.0 │ 25 Dec 25 19:02 UTC │ 25 Dec 25 19:02 UTC │
	│ addons  │ enable dashboard -p no-preload-148352 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                       │ no-preload-148352            │ jenkins │ v1.37.0 │ 25 Dec 25 19:02 UTC │ 25 Dec 25 19:02 UTC │
	│ start   │ -p no-preload-148352 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-rc.1                                                                                       │ no-preload-148352            │ jenkins │ v1.37.0 │ 25 Dec 25 19:02 UTC │ 25 Dec 25 19:03 UTC │
	│ addons  │ enable dashboard -p embed-certs-684693 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                      │ embed-certs-684693           │ jenkins │ v1.37.0 │ 25 Dec 25 19:02 UTC │ 25 Dec 25 19:02 UTC │
	│ start   │ -p embed-certs-684693 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.3                                                                                             │ embed-certs-684693           │ jenkins │ v1.37.0 │ 25 Dec 25 19:02 UTC │ 25 Dec 25 19:03 UTC │
	│ image   │ old-k8s-version-163446 image list --format=json                                                                                                                                                                                                    │ old-k8s-version-163446       │ jenkins │ v1.37.0 │ 25 Dec 25 19:02 UTC │ 25 Dec 25 19:02 UTC │
	│ pause   │ -p old-k8s-version-163446 --alsologtostderr -v=1                                                                                                                                                                                                   │ old-k8s-version-163446       │ jenkins │ v1.37.0 │ 25 Dec 25 19:02 UTC │                     │
	│ delete  │ -p old-k8s-version-163446                                                                                                                                                                                                                          │ old-k8s-version-163446       │ jenkins │ v1.37.0 │ 25 Dec 25 19:02 UTC │ 25 Dec 25 19:03 UTC │
	│ delete  │ -p old-k8s-version-163446                                                                                                                                                                                                                          │ old-k8s-version-163446       │ jenkins │ v1.37.0 │ 25 Dec 25 19:03 UTC │ 25 Dec 25 19:03 UTC │
	│ delete  │ -p disable-driver-mounts-102827                                                                                                                                                                                                                    │ disable-driver-mounts-102827 │ jenkins │ v1.37.0 │ 25 Dec 25 19:03 UTC │ 25 Dec 25 19:03 UTC │
	│ start   │ -p default-k8s-diff-port-960022 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.3                                                                           │ default-k8s-diff-port-960022 │ jenkins │ v1.37.0 │ 25 Dec 25 19:03 UTC │                     │
	│ image   │ no-preload-148352 image list --format=json                                                                                                                                                                                                         │ no-preload-148352            │ jenkins │ v1.37.0 │ 25 Dec 25 19:03 UTC │ 25 Dec 25 19:03 UTC │
	│ pause   │ -p no-preload-148352 --alsologtostderr -v=1                                                                                                                                                                                                        │ no-preload-148352            │ jenkins │ v1.37.0 │ 25 Dec 25 19:03 UTC │                     │
	│ delete  │ -p no-preload-148352                                                                                                                                                                                                                               │ no-preload-148352            │ jenkins │ v1.37.0 │ 25 Dec 25 19:03 UTC │ 25 Dec 25 19:03 UTC │
	│ delete  │ -p no-preload-148352                                                                                                                                                                                                                               │ no-preload-148352            │ jenkins │ v1.37.0 │ 25 Dec 25 19:03 UTC │ 25 Dec 25 19:03 UTC │
	│ start   │ -p newest-cni-731832 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-rc.1 │ newest-cni-731832            │ jenkins │ v1.37.0 │ 25 Dec 25 19:03 UTC │                     │
	│ image   │ embed-certs-684693 image list --format=json                                                                                                                                                                                                        │ embed-certs-684693           │ jenkins │ v1.37.0 │ 25 Dec 25 19:03 UTC │ 25 Dec 25 19:03 UTC │
	│ pause   │ -p embed-certs-684693 --alsologtostderr -v=1                                                                                                                                                                                                       │ embed-certs-684693           │ jenkins │ v1.37.0 │ 25 Dec 25 19:03 UTC │                     │
	└─────────┴────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────
────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/25 19:03:32
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1225 19:03:32.784386  296906 out.go:360] Setting OutFile to fd 1 ...
	I1225 19:03:32.784681  296906 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1225 19:03:32.784691  296906 out.go:374] Setting ErrFile to fd 2...
	I1225 19:03:32.784696  296906 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1225 19:03:32.785006  296906 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22301-5579/.minikube/bin
	I1225 19:03:32.785559  296906 out.go:368] Setting JSON to false
	I1225 19:03:32.786959  296906 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":2761,"bootTime":1766686652,"procs":317,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1225 19:03:32.787014  296906 start.go:143] virtualization: kvm guest
	I1225 19:03:32.789781  296906 out.go:179] * [newest-cni-731832] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1225 19:03:32.791138  296906 out.go:179]   - MINIKUBE_LOCATION=22301
	I1225 19:03:32.791160  296906 notify.go:221] Checking for updates...
	I1225 19:03:32.793576  296906 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1225 19:03:32.794841  296906 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22301-5579/kubeconfig
	I1225 19:03:32.795989  296906 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22301-5579/.minikube
	I1225 19:03:32.797198  296906 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1225 19:03:32.798242  296906 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1225 19:03:32.799756  296906 config.go:182] Loaded profile config "default-k8s-diff-port-960022": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1225 19:03:32.799863  296906 config.go:182] Loaded profile config "embed-certs-684693": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1225 19:03:32.799971  296906 config.go:182] Loaded profile config "kubernetes-upgrade-498224": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-rc.1
	I1225 19:03:32.800092  296906 driver.go:422] Setting default libvirt URI to qemu:///system
	I1225 19:03:32.825754  296906 docker.go:124] docker version: linux-29.1.3:Docker Engine - Community
	I1225 19:03:32.825853  296906 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1225 19:03:32.881074  296906 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:64 OomKillDisable:false NGoroutines:75 SystemTime:2025-12-25 19:03:32.871109577 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:dea7da592f5d1d2b7755e3a161be07f43fad8f75 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1225 19:03:32.881221  296906 docker.go:319] overlay module found
	I1225 19:03:32.883082  296906 out.go:179] * Using the docker driver based on user configuration
	I1225 19:03:32.884143  296906 start.go:309] selected driver: docker
	I1225 19:03:32.884159  296906 start.go:928] validating driver "docker" against <nil>
	I1225 19:03:32.884170  296906 start.go:939] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1225 19:03:32.884742  296906 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1225 19:03:32.942734  296906 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:64 OomKillDisable:false NGoroutines:75 SystemTime:2025-12-25 19:03:32.933161539 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:dea7da592f5d1d2b7755e3a161be07f43fad8f75 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1225 19:03:32.942943  296906 start_flags.go:333] no existing cluster config was found, will generate one from the flags 
	W1225 19:03:32.942972  296906 out.go:285] ! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	I1225 19:03:32.943270  296906 start_flags.go:1038] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1225 19:03:32.945600  296906 out.go:179] * Using Docker driver with root privileges
	I1225 19:03:32.946793  296906 cni.go:84] Creating CNI manager for ""
	I1225 19:03:32.946852  296906 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1225 19:03:32.946873  296906 start_flags.go:342] Found "CNI" CNI - setting NetworkPlugin=cni
	I1225 19:03:32.946987  296906 start.go:353] cluster config:
	{Name:newest-cni-731832 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:newest-cni-731832 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1225 19:03:32.948324  296906 out.go:179] * Starting "newest-cni-731832" primary control-plane node in "newest-cni-731832" cluster
	I1225 19:03:32.949391  296906 cache.go:134] Beginning downloading kic base image for docker with crio
	I1225 19:03:32.950620  296906 out.go:179] * Pulling base image v0.0.48-1766570851-22316 ...
	I1225 19:03:32.951663  296906 preload.go:188] Checking if preload exists for k8s version v1.35.0-rc.1 and runtime crio
	I1225 19:03:32.951693  296906 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22301-5579/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-rc.1-cri-o-overlay-amd64.tar.lz4
	I1225 19:03:32.951710  296906 cache.go:65] Caching tarball of preloaded images
	I1225 19:03:32.951761  296906 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a in local docker daemon
	I1225 19:03:32.951787  296906 preload.go:251] Found /home/jenkins/minikube-integration/22301-5579/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-rc.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1225 19:03:32.951795  296906 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-rc.1 on crio
	I1225 19:03:32.951882  296906 profile.go:143] Saving config to /home/jenkins/minikube-integration/22301-5579/.minikube/profiles/newest-cni-731832/config.json ...
	I1225 19:03:32.951937  296906 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22301-5579/.minikube/profiles/newest-cni-731832/config.json: {Name:mkb10c92f3552c610a0c52b2c7838fb72bd11174 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1225 19:03:32.972522  296906 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a in local docker daemon, skipping pull
	I1225 19:03:32.972540  296906 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a exists in daemon, skipping load
	I1225 19:03:32.972555  296906 cache.go:243] Successfully downloaded all kic artifacts
	I1225 19:03:32.972580  296906 start.go:360] acquireMachinesLock for newest-cni-731832: {Name:mk069bfbc24c2c34510fc7ad141c2d655d217990 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1225 19:03:32.972669  296906 start.go:364] duration metric: took 74.146µs to acquireMachinesLock for "newest-cni-731832"
	I1225 19:03:32.972691  296906 start.go:93] Provisioning new machine with config: &{Name:newest-cni-731832 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:newest-cni-731832 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false} &{Name: IP: Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1225 19:03:32.972753  296906 start.go:125] createHost starting for "" (driver="docker")
	W1225 19:03:30.037343  290541 node_ready.go:57] node "default-k8s-diff-port-960022" has "Ready":"False" status (will retry)
	W1225 19:03:32.535887  290541 node_ready.go:57] node "default-k8s-diff-port-960022" has "Ready":"False" status (will retry)
	I1225 19:03:34.163573  260034 api_server.go:299] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1225 19:03:34.164022  260034 api_server.go:315] stopped: https://192.168.94.2:8443/healthz: Get "https://192.168.94.2:8443/healthz": dial tcp 192.168.94.2:8443: connect: connection refused
	I1225 19:03:34.164073  260034 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1225 19:03:34.164124  260034 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1225 19:03:34.194190  260034 cri.go:96] found id: "1d4a54214a1675f6c77eff77964da947e4f00cff45e007ad6adb0c898c3e76fa"
	I1225 19:03:34.194213  260034 cri.go:96] found id: "e3d5802d1e3d9285d4b0925e9f5391b90305352a65abbab3d6461e977e07719c"
	I1225 19:03:34.194219  260034 cri.go:96] found id: ""
	I1225 19:03:34.194227  260034 logs.go:282] 2 containers: [1d4a54214a1675f6c77eff77964da947e4f00cff45e007ad6adb0c898c3e76fa e3d5802d1e3d9285d4b0925e9f5391b90305352a65abbab3d6461e977e07719c]
	I1225 19:03:34.194284  260034 ssh_runner.go:195] Run: which crictl
	I1225 19:03:34.198239  260034 ssh_runner.go:195] Run: which crictl
	I1225 19:03:34.201831  260034 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1225 19:03:34.201908  260034 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1225 19:03:34.230151  260034 cri.go:96] found id: "b9e20d208a63ece66f4b7d503d5220ba92810f69d6963dee30a8cf70eeec5508"
	I1225 19:03:34.230171  260034 cri.go:96] found id: ""
	I1225 19:03:34.230180  260034 logs.go:282] 1 containers: [b9e20d208a63ece66f4b7d503d5220ba92810f69d6963dee30a8cf70eeec5508]
	I1225 19:03:34.230244  260034 ssh_runner.go:195] Run: which crictl
	I1225 19:03:34.234261  260034 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1225 19:03:34.234322  260034 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1225 19:03:34.262802  260034 cri.go:96] found id: ""
	I1225 19:03:34.262836  260034 logs.go:282] 0 containers: []
	W1225 19:03:34.262848  260034 logs.go:284] No container was found matching "coredns"
	I1225 19:03:34.262856  260034 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1225 19:03:34.262938  260034 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1225 19:03:34.289006  260034 cri.go:96] found id: "5908cdc560e3221d929c96c779b2e73ece21f96acb63b3adbf448cc7de791f2b"
	I1225 19:03:34.289025  260034 cri.go:96] found id: "47a1daa4bde269c4add70fd957319c2bd011e78566a1187d9a76b78fb4005782"
	I1225 19:03:34.289029  260034 cri.go:96] found id: ""
	I1225 19:03:34.289036  260034 logs.go:282] 2 containers: [5908cdc560e3221d929c96c779b2e73ece21f96acb63b3adbf448cc7de791f2b 47a1daa4bde269c4add70fd957319c2bd011e78566a1187d9a76b78fb4005782]
	I1225 19:03:34.289085  260034 ssh_runner.go:195] Run: which crictl
	I1225 19:03:34.293178  260034 ssh_runner.go:195] Run: which crictl
	I1225 19:03:34.296915  260034 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1225 19:03:34.296993  260034 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1225 19:03:34.325966  260034 cri.go:96] found id: ""
	I1225 19:03:34.325997  260034 logs.go:282] 0 containers: []
	W1225 19:03:34.326009  260034 logs.go:284] No container was found matching "kube-proxy"
	I1225 19:03:34.326016  260034 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1225 19:03:34.326063  260034 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1225 19:03:34.359662  260034 cri.go:96] found id: "0887dc59ddb6090de10772f8a1f50ab0d2afe1586d7c8d118ffeef1810deb88d"
	I1225 19:03:34.359687  260034 cri.go:96] found id: "4d705cb1d8d35bb521ed14e8e331d8b60738d540fae000680afb7e9d190d17db"
	I1225 19:03:34.359694  260034 cri.go:96] found id: "33343dda71f9e4a85025812aa89a625b9ed4aacd8cf40e537a040b7a352c6338"
	I1225 19:03:34.359699  260034 cri.go:96] found id: ""
	I1225 19:03:34.359709  260034 logs.go:282] 3 containers: [0887dc59ddb6090de10772f8a1f50ab0d2afe1586d7c8d118ffeef1810deb88d 4d705cb1d8d35bb521ed14e8e331d8b60738d540fae000680afb7e9d190d17db 33343dda71f9e4a85025812aa89a625b9ed4aacd8cf40e537a040b7a352c6338]
	I1225 19:03:34.359769  260034 ssh_runner.go:195] Run: which crictl
	I1225 19:03:34.363818  260034 ssh_runner.go:195] Run: which crictl
	I1225 19:03:34.367846  260034 ssh_runner.go:195] Run: which crictl
	I1225 19:03:34.371551  260034 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1225 19:03:34.371617  260034 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1225 19:03:34.402968  260034 cri.go:96] found id: ""
	I1225 19:03:34.402996  260034 logs.go:282] 0 containers: []
	W1225 19:03:34.403007  260034 logs.go:284] No container was found matching "kindnet"
	I1225 19:03:34.403015  260034 cri.go:61] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1225 19:03:34.403074  260034 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=storage-provisioner
	I1225 19:03:34.431863  260034 cri.go:96] found id: ""
	I1225 19:03:34.431886  260034 logs.go:282] 0 containers: []
	W1225 19:03:34.431917  260034 logs.go:284] No container was found matching "storage-provisioner"
	I1225 19:03:34.431930  260034 logs.go:123] Gathering logs for kubelet ...
	I1225 19:03:34.431944  260034 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1225 19:03:34.519787  260034 logs.go:123] Gathering logs for dmesg ...
	I1225 19:03:34.519828  260034 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1225 19:03:34.534689  260034 logs.go:123] Gathering logs for kube-apiserver [1d4a54214a1675f6c77eff77964da947e4f00cff45e007ad6adb0c898c3e76fa] ...
	I1225 19:03:34.534722  260034 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 1d4a54214a1675f6c77eff77964da947e4f00cff45e007ad6adb0c898c3e76fa"
	I1225 19:03:34.572954  260034 logs.go:123] Gathering logs for kube-apiserver [e3d5802d1e3d9285d4b0925e9f5391b90305352a65abbab3d6461e977e07719c] ...
	I1225 19:03:34.572985  260034 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e3d5802d1e3d9285d4b0925e9f5391b90305352a65abbab3d6461e977e07719c"
	I1225 19:03:34.613584  260034 logs.go:123] Gathering logs for etcd [b9e20d208a63ece66f4b7d503d5220ba92810f69d6963dee30a8cf70eeec5508] ...
	I1225 19:03:34.613619  260034 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b9e20d208a63ece66f4b7d503d5220ba92810f69d6963dee30a8cf70eeec5508"
	I1225 19:03:34.651423  260034 logs.go:123] Gathering logs for kube-controller-manager [0887dc59ddb6090de10772f8a1f50ab0d2afe1586d7c8d118ffeef1810deb88d] ...
	I1225 19:03:34.651450  260034 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 0887dc59ddb6090de10772f8a1f50ab0d2afe1586d7c8d118ffeef1810deb88d"
	I1225 19:03:34.684314  260034 logs.go:123] Gathering logs for kube-controller-manager [4d705cb1d8d35bb521ed14e8e331d8b60738d540fae000680afb7e9d190d17db] ...
	I1225 19:03:34.684343  260034 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 4d705cb1d8d35bb521ed14e8e331d8b60738d540fae000680afb7e9d190d17db"
	I1225 19:03:34.712963  260034 logs.go:123] Gathering logs for describe nodes ...
	I1225 19:03:34.712995  260034 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1225 19:03:34.775783  260034 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1225 19:03:34.775810  260034 logs.go:123] Gathering logs for kube-scheduler [5908cdc560e3221d929c96c779b2e73ece21f96acb63b3adbf448cc7de791f2b] ...
	I1225 19:03:34.775830  260034 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5908cdc560e3221d929c96c779b2e73ece21f96acb63b3adbf448cc7de791f2b"
	I1225 19:03:34.809301  260034 logs.go:123] Gathering logs for kube-scheduler [47a1daa4bde269c4add70fd957319c2bd011e78566a1187d9a76b78fb4005782] ...
	I1225 19:03:34.809336  260034 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 47a1daa4bde269c4add70fd957319c2bd011e78566a1187d9a76b78fb4005782"
	I1225 19:03:34.846254  260034 logs.go:123] Gathering logs for kube-controller-manager [33343dda71f9e4a85025812aa89a625b9ed4aacd8cf40e537a040b7a352c6338] ...
	I1225 19:03:34.846282  260034 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 33343dda71f9e4a85025812aa89a625b9ed4aacd8cf40e537a040b7a352c6338"
	I1225 19:03:34.876948  260034 logs.go:123] Gathering logs for CRI-O ...
	I1225 19:03:34.876970  260034 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1225 19:03:34.932244  260034 logs.go:123] Gathering logs for container status ...
	I1225 19:03:34.932276  260034 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1225 19:03:37.472989  260034 api_server.go:299] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1225 19:03:37.474023  260034 api_server.go:315] stopped: https://192.168.94.2:8443/healthz: Get "https://192.168.94.2:8443/healthz": dial tcp 192.168.94.2:8443: connect: connection refused
	I1225 19:03:37.474084  260034 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1225 19:03:37.474149  260034 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1225 19:03:37.508842  260034 cri.go:96] found id: "1d4a54214a1675f6c77eff77964da947e4f00cff45e007ad6adb0c898c3e76fa"
	I1225 19:03:37.508866  260034 cri.go:96] found id: "e3d5802d1e3d9285d4b0925e9f5391b90305352a65abbab3d6461e977e07719c"
	I1225 19:03:37.508870  260034 cri.go:96] found id: ""
	I1225 19:03:37.508877  260034 logs.go:282] 2 containers: [1d4a54214a1675f6c77eff77964da947e4f00cff45e007ad6adb0c898c3e76fa e3d5802d1e3d9285d4b0925e9f5391b90305352a65abbab3d6461e977e07719c]
	I1225 19:03:37.508943  260034 ssh_runner.go:195] Run: which crictl
	I1225 19:03:37.513843  260034 ssh_runner.go:195] Run: which crictl
	I1225 19:03:37.518197  260034 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1225 19:03:37.518259  260034 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1225 19:03:37.548423  260034 cri.go:96] found id: "b9e20d208a63ece66f4b7d503d5220ba92810f69d6963dee30a8cf70eeec5508"
	I1225 19:03:37.548447  260034 cri.go:96] found id: ""
	I1225 19:03:37.548457  260034 logs.go:282] 1 containers: [b9e20d208a63ece66f4b7d503d5220ba92810f69d6963dee30a8cf70eeec5508]
	I1225 19:03:37.548510  260034 ssh_runner.go:195] Run: which crictl
	I1225 19:03:37.553476  260034 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1225 19:03:37.553540  260034 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1225 19:03:32.974759  296906 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1225 19:03:32.975014  296906 start.go:159] libmachine.API.Create for "newest-cni-731832" (driver="docker")
	I1225 19:03:32.975046  296906 client.go:173] LocalClient.Create starting
	I1225 19:03:32.975101  296906 main.go:144] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22301-5579/.minikube/certs/ca.pem
	I1225 19:03:32.975132  296906 main.go:144] libmachine: Decoding PEM data...
	I1225 19:03:32.975151  296906 main.go:144] libmachine: Parsing certificate...
	I1225 19:03:32.975202  296906 main.go:144] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22301-5579/.minikube/certs/cert.pem
	I1225 19:03:32.975233  296906 main.go:144] libmachine: Decoding PEM data...
	I1225 19:03:32.975242  296906 main.go:144] libmachine: Parsing certificate...
	I1225 19:03:32.975545  296906 cli_runner.go:164] Run: docker network inspect newest-cni-731832 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1225 19:03:32.992518  296906 cli_runner.go:211] docker network inspect newest-cni-731832 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1225 19:03:32.992604  296906 network_create.go:284] running [docker network inspect newest-cni-731832] to gather additional debugging logs...
	I1225 19:03:32.992626  296906 cli_runner.go:164] Run: docker network inspect newest-cni-731832
	W1225 19:03:33.009126  296906 cli_runner.go:211] docker network inspect newest-cni-731832 returned with exit code 1
	I1225 19:03:33.009158  296906 network_create.go:287] error running [docker network inspect newest-cni-731832]: docker network inspect newest-cni-731832: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network newest-cni-731832 not found
	I1225 19:03:33.009172  296906 network_create.go:289] output of [docker network inspect newest-cni-731832]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network newest-cni-731832 not found
	
	** /stderr **
	I1225 19:03:33.009271  296906 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1225 19:03:33.026958  296906 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-ced36c84bfdd IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:4a:63:07:5b:3f:80} reservation:<nil>}
	I1225 19:03:33.027674  296906 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-4f7e79553acc IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:92:4f:4f:8b:03:9b} reservation:<nil>}
	I1225 19:03:33.028501  296906 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-f47bec209e15 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:b2:e9:83:11:22:b7} reservation:<nil>}
	I1225 19:03:33.029113  296906 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-b5ae0820826f IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:72:16:14:1f:73:da} reservation:<nil>}
	I1225 19:03:33.029954  296906 network.go:206] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001e417f0}
	I1225 19:03:33.029984  296906 network_create.go:124] attempt to create docker network newest-cni-731832 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
	I1225 19:03:33.030037  296906 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=newest-cni-731832 newest-cni-731832
	I1225 19:03:33.080026  296906 network_create.go:108] docker network newest-cni-731832 192.168.85.0/24 created
	I1225 19:03:33.080062  296906 kic.go:121] calculated static IP "192.168.85.2" for the "newest-cni-731832" container
	I1225 19:03:33.080140  296906 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1225 19:03:33.097303  296906 cli_runner.go:164] Run: docker volume create newest-cni-731832 --label name.minikube.sigs.k8s.io=newest-cni-731832 --label created_by.minikube.sigs.k8s.io=true
	I1225 19:03:33.115473  296906 oci.go:103] Successfully created a docker volume newest-cni-731832
	I1225 19:03:33.115556  296906 cli_runner.go:164] Run: docker run --rm --name newest-cni-731832-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-731832 --entrypoint /usr/bin/test -v newest-cni-731832:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a -d /var/lib
	I1225 19:03:33.496729  296906 oci.go:107] Successfully prepared a docker volume newest-cni-731832
	I1225 19:03:33.496796  296906 preload.go:188] Checking if preload exists for k8s version v1.35.0-rc.1 and runtime crio
	I1225 19:03:33.496814  296906 kic.go:194] Starting extracting preloaded images to volume ...
	I1225 19:03:33.496925  296906 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22301-5579/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-rc.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v newest-cni-731832:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a -I lz4 -xf /preloaded.tar -C /extractDir
	I1225 19:03:37.425953  296906 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22301-5579/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-rc.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v newest-cni-731832:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a -I lz4 -xf /preloaded.tar -C /extractDir: (3.928935051s)
	I1225 19:03:37.425986  296906 kic.go:203] duration metric: took 3.929168316s to extract preloaded images to volume ...
	W1225 19:03:37.426088  296906 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1225 19:03:37.426144  296906 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1225 19:03:37.426192  296906 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1225 19:03:37.493805  296906 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname newest-cni-731832 --name newest-cni-731832 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-731832 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=newest-cni-731832 --network newest-cni-731832 --ip 192.168.85.2 --volume newest-cni-731832:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a
	W1225 19:03:34.536394  290541 node_ready.go:57] node "default-k8s-diff-port-960022" has "Ready":"False" status (will retry)
	W1225 19:03:37.037178  290541 node_ready.go:57] node "default-k8s-diff-port-960022" has "Ready":"False" status (will retry)
	I1225 19:03:38.037946  290541 node_ready.go:49] node "default-k8s-diff-port-960022" is "Ready"
	I1225 19:03:38.038044  290541 node_ready.go:38] duration metric: took 12.004880184s for node "default-k8s-diff-port-960022" to be "Ready" ...
	I1225 19:03:38.038070  290541 api_server.go:52] waiting for apiserver process to appear ...
	I1225 19:03:38.038120  290541 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1225 19:03:38.053445  290541 api_server.go:72] duration metric: took 12.345422409s to wait for apiserver process to appear ...
	I1225 19:03:38.053475  290541 api_server.go:88] waiting for apiserver healthz status ...
	I1225 19:03:38.053498  290541 api_server.go:299] Checking apiserver healthz at https://192.168.103.2:8444/healthz ...
	I1225 19:03:38.058130  290541 api_server.go:325] https://192.168.103.2:8444/healthz returned 200:
	ok
	I1225 19:03:38.059289  290541 api_server.go:141] control plane version: v1.34.3
	I1225 19:03:38.059316  290541 api_server.go:131] duration metric: took 5.833449ms to wait for apiserver health ...
	I1225 19:03:38.059327  290541 system_pods.go:43] waiting for kube-system pods to appear ...
	I1225 19:03:38.062944  290541 system_pods.go:59] 8 kube-system pods found
	I1225 19:03:38.062979  290541 system_pods.go:61] "coredns-66bc5c9577-c9wmz" [773864bb-884f-4d15-9364-d587199c3d06] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1225 19:03:38.062989  290541 system_pods.go:61] "etcd-default-k8s-diff-port-960022" [8a573ecd-55c7-4c08-949a-5c67b684c324] Running
	I1225 19:03:38.063007  290541 system_pods.go:61] "kindnet-hj6rr" [74edb28b-8829-4f8f-b2e9-caa22db0d2f6] Running
	I1225 19:03:38.063020  290541 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-960022" [21741cff-b12b-4274-8025-ef0593d265ad] Running
	I1225 19:03:38.063030  290541 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-960022" [4d468930-1722-45c3-b16e-3a6dfdc365e9] Running
	I1225 19:03:38.063037  290541 system_pods.go:61] "kube-proxy-wl784" [11627834-f71f-4055-a738-189d56587a73] Running
	I1225 19:03:38.063045  290541 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-960022" [d759ed68-b89c-4594-b37b-bf75c3105863] Running
	I1225 19:03:38.063051  290541 system_pods.go:61] "storage-provisioner" [9266e19b-60fd-4de1-bf8a-1998b627e8ed] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1225 19:03:38.063062  290541 system_pods.go:74] duration metric: took 3.728109ms to wait for pod list to return data ...
	I1225 19:03:38.063076  290541 default_sa.go:34] waiting for default service account to be created ...
	I1225 19:03:38.066163  290541 default_sa.go:45] found service account: "default"
	I1225 19:03:38.066183  290541 default_sa.go:55] duration metric: took 3.101681ms for default service account to be created ...
	I1225 19:03:38.066194  290541 system_pods.go:116] waiting for k8s-apps to be running ...
	I1225 19:03:38.069657  290541 system_pods.go:86] 8 kube-system pods found
	I1225 19:03:38.069690  290541 system_pods.go:89] "coredns-66bc5c9577-c9wmz" [773864bb-884f-4d15-9364-d587199c3d06] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1225 19:03:38.069698  290541 system_pods.go:89] "etcd-default-k8s-diff-port-960022" [8a573ecd-55c7-4c08-949a-5c67b684c324] Running
	I1225 19:03:38.069705  290541 system_pods.go:89] "kindnet-hj6rr" [74edb28b-8829-4f8f-b2e9-caa22db0d2f6] Running
	I1225 19:03:38.069712  290541 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-960022" [21741cff-b12b-4274-8025-ef0593d265ad] Running
	I1225 19:03:38.069718  290541 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-960022" [4d468930-1722-45c3-b16e-3a6dfdc365e9] Running
	I1225 19:03:38.069724  290541 system_pods.go:89] "kube-proxy-wl784" [11627834-f71f-4055-a738-189d56587a73] Running
	I1225 19:03:38.069729  290541 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-960022" [d759ed68-b89c-4594-b37b-bf75c3105863] Running
	I1225 19:03:38.069736  290541 system_pods.go:89] "storage-provisioner" [9266e19b-60fd-4de1-bf8a-1998b627e8ed] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1225 19:03:38.069776  290541 retry.go:84] will retry after 200ms: missing components: kube-dns
	
	
	==> CRI-O <==
	Dec 25 19:02:55 embed-certs-684693 crio[568]: time="2025-12-25T19:02:55.907693483Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Dec 25 19:02:55 embed-certs-684693 crio[568]: time="2025-12-25T19:02:55.911221785Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Dec 25 19:02:55 embed-certs-684693 crio[568]: time="2025-12-25T19:02:55.911242838Z" level=info msg="Updated default CNI network name to kindnet"
	Dec 25 19:03:11 embed-certs-684693 crio[568]: time="2025-12-25T19:03:11.059018013Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=728586e9-7e58-4074-ba03-1acb1d53b845 name=/runtime.v1.ImageService/ImageStatus
	Dec 25 19:03:11 embed-certs-684693 crio[568]: time="2025-12-25T19:03:11.062182232Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=3392ac9e-a64d-416d-b4e5-cd1b705fd9ca name=/runtime.v1.ImageService/ImageStatus
	Dec 25 19:03:11 embed-certs-684693 crio[568]: time="2025-12-25T19:03:11.065518604Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-bcvcs/dashboard-metrics-scraper" id=3c610850-b020-4e98-8c93-b1ae250a2c73 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 25 19:03:11 embed-certs-684693 crio[568]: time="2025-12-25T19:03:11.065758853Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 25 19:03:11 embed-certs-684693 crio[568]: time="2025-12-25T19:03:11.07351927Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 25 19:03:11 embed-certs-684693 crio[568]: time="2025-12-25T19:03:11.074179636Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 25 19:03:11 embed-certs-684693 crio[568]: time="2025-12-25T19:03:11.10056414Z" level=info msg="Created container 8b85a58f6727b85925d66ae7c892925d7f0d6ad84cf0a49ac39c7dac9256cb8d: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-bcvcs/dashboard-metrics-scraper" id=3c610850-b020-4e98-8c93-b1ae250a2c73 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 25 19:03:11 embed-certs-684693 crio[568]: time="2025-12-25T19:03:11.101254103Z" level=info msg="Starting container: 8b85a58f6727b85925d66ae7c892925d7f0d6ad84cf0a49ac39c7dac9256cb8d" id=cf2fa182-db39-4058-9774-5b773cccf18d name=/runtime.v1.RuntimeService/StartContainer
	Dec 25 19:03:11 embed-certs-684693 crio[568]: time="2025-12-25T19:03:11.103252187Z" level=info msg="Started container" PID=1770 containerID=8b85a58f6727b85925d66ae7c892925d7f0d6ad84cf0a49ac39c7dac9256cb8d description=kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-bcvcs/dashboard-metrics-scraper id=cf2fa182-db39-4058-9774-5b773cccf18d name=/runtime.v1.RuntimeService/StartContainer sandboxID=c9e44378d708b0e0e5b8f655b83a9d2c22b17e97fa7e74f87b1e603d45235902
	Dec 25 19:03:11 embed-certs-684693 crio[568]: time="2025-12-25T19:03:11.163545483Z" level=info msg="Removing container: c3217d9c195c881876d38490fc5fa9a60e72aacb9861ebb97fb47adc20058b6c" id=7d6892e5-0c8b-41fe-b723-176101d972f4 name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 25 19:03:11 embed-certs-684693 crio[568]: time="2025-12-25T19:03:11.173721428Z" level=info msg="Removed container c3217d9c195c881876d38490fc5fa9a60e72aacb9861ebb97fb47adc20058b6c: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-bcvcs/dashboard-metrics-scraper" id=7d6892e5-0c8b-41fe-b723-176101d972f4 name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 25 19:03:16 embed-certs-684693 crio[568]: time="2025-12-25T19:03:16.176778412Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=5e1fe889-60b1-4396-b6d4-727f2997652e name=/runtime.v1.ImageService/ImageStatus
	Dec 25 19:03:16 embed-certs-684693 crio[568]: time="2025-12-25T19:03:16.177664641Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=37395421-7a12-40ee-bb53-be6934812f65 name=/runtime.v1.ImageService/ImageStatus
	Dec 25 19:03:16 embed-certs-684693 crio[568]: time="2025-12-25T19:03:16.178711044Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=1e32921a-7fdf-4607-924c-fe67cc0c493a name=/runtime.v1.RuntimeService/CreateContainer
	Dec 25 19:03:16 embed-certs-684693 crio[568]: time="2025-12-25T19:03:16.178831282Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 25 19:03:16 embed-certs-684693 crio[568]: time="2025-12-25T19:03:16.184774518Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 25 19:03:16 embed-certs-684693 crio[568]: time="2025-12-25T19:03:16.184993713Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/a8ec94d4392e5fe8de386a813f926ddc2ede3fcae163248965b3132fd374e9d0/merged/etc/passwd: no such file or directory"
	Dec 25 19:03:16 embed-certs-684693 crio[568]: time="2025-12-25T19:03:16.185027084Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/a8ec94d4392e5fe8de386a813f926ddc2ede3fcae163248965b3132fd374e9d0/merged/etc/group: no such file or directory"
	Dec 25 19:03:16 embed-certs-684693 crio[568]: time="2025-12-25T19:03:16.185882775Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 25 19:03:16 embed-certs-684693 crio[568]: time="2025-12-25T19:03:16.213272978Z" level=info msg="Created container 0586e432ee43e609607ade028951a4592cec0588e09512fa74f54317148acb65: kube-system/storage-provisioner/storage-provisioner" id=1e32921a-7fdf-4607-924c-fe67cc0c493a name=/runtime.v1.RuntimeService/CreateContainer
	Dec 25 19:03:16 embed-certs-684693 crio[568]: time="2025-12-25T19:03:16.213855928Z" level=info msg="Starting container: 0586e432ee43e609607ade028951a4592cec0588e09512fa74f54317148acb65" id=f585a1a0-d7b9-47bd-bf1d-014b38205e5b name=/runtime.v1.RuntimeService/StartContainer
	Dec 25 19:03:16 embed-certs-684693 crio[568]: time="2025-12-25T19:03:16.215881859Z" level=info msg="Started container" PID=1784 containerID=0586e432ee43e609607ade028951a4592cec0588e09512fa74f54317148acb65 description=kube-system/storage-provisioner/storage-provisioner id=f585a1a0-d7b9-47bd-bf1d-014b38205e5b name=/runtime.v1.RuntimeService/StartContainer sandboxID=450220d4e7fda9b3ac53de69fb4b0deca3b1bbe43eb7421c74695abfbbe8b257
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED             STATE               NAME                        ATTEMPT             POD ID              POD                                          NAMESPACE
	0586e432ee43e       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           23 seconds ago      Running             storage-provisioner         1                   450220d4e7fda       storage-provisioner                          kube-system
	8b85a58f6727b       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           28 seconds ago      Exited              dashboard-metrics-scraper   2                   c9e44378d708b       dashboard-metrics-scraper-6ffb444bf9-bcvcs   kubernetes-dashboard
	cd0104e7b2433       docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029   46 seconds ago      Running             kubernetes-dashboard        0                   a583c9ce3f3d0       kubernetes-dashboard-855c9754f9-xv29k        kubernetes-dashboard
	fb0da24909dab       56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c                                           54 seconds ago      Running             busybox                     1                   318e2c32cecba       busybox                                      default
	e3f10798d2c5c       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                           54 seconds ago      Running             coredns                     0                   0e70e4d94ebc4       coredns-66bc5c9577-n4nqj                     kube-system
	ea9cdb66e74e5       4921d7a6dffa922dd679732ba4797085c4f39e9a53bee8b6fdb1d463e8571251                                           54 seconds ago      Running             kindnet-cni                 0                   0177ad6d25279       kindnet-gqdkf                                kube-system
	6be834e877742       36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691                                           54 seconds ago      Running             kube-proxy                  0                   99f788367208b       kube-proxy-wzb26                             kube-system
	294fb941f2913       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           54 seconds ago      Exited              storage-provisioner         0                   450220d4e7fda       storage-provisioner                          kube-system
	8d7e8dc3eb792       aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78                                           57 seconds ago      Running             kube-scheduler              0                   c2b95216334e5       kube-scheduler-embed-certs-684693            kube-system
	8d2b7baedf500       a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1                                           57 seconds ago      Running             etcd                        0                   77f839976da23       etcd-embed-certs-684693                      kube-system
	f163abb6ccc23       5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942                                           57 seconds ago      Running             kube-controller-manager     0                   dd75a8448dc07       kube-controller-manager-embed-certs-684693   kube-system
	96d9542c19721       aa27095f5619377172f3d59289ccb2ba567ebea93a736d1705be068b2c030b0c                                           57 seconds ago      Running             kube-apiserver              0                   a9186852dd71e       kube-apiserver-embed-certs-684693            kube-system
	
	
	==> coredns [e3f10798d2c5cc7aa34b0f7c0769cc5f3bc2ddad54195a5724aa2248050b4d45] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 3e2243e8b9e7116f563b83b1933f477a68ba9ad4a829ed5d7e54629fb2ce53528b9bc6023030be20be434ad805fd246296dd428c64e9bbef3a70f22b8621f560
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:39266 - 49002 "HINFO IN 5977676950089410459.8639051078114193271. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.020000086s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	
	
	==> describe nodes <==
	Name:               embed-certs-684693
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=embed-certs-684693
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=65b0339f3ab6fa9cf527eb915d9288ef7a9c7fef
	                    minikube.k8s.io/name=embed-certs-684693
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_25T19_01_46_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 25 Dec 2025 19:01:42 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-684693
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 25 Dec 2025 19:03:25 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 25 Dec 2025 19:03:15 +0000   Thu, 25 Dec 2025 19:01:40 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 25 Dec 2025 19:03:15 +0000   Thu, 25 Dec 2025 19:01:40 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 25 Dec 2025 19:03:15 +0000   Thu, 25 Dec 2025 19:01:40 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 25 Dec 2025 19:03:15 +0000   Thu, 25 Dec 2025 19:02:03 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    embed-certs-684693
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863360Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863360Ki
	  pods:               110
	System Info:
	  Machine ID:                 6a05ebbd9dbde47388fc365d694bbbfd
	  System UUID:                23021cb7-5678-4260-b426-ee2032296d45
	  Boot ID:                    665c5054-bd76-444c-ba4d-23c4edde1464
	  Kernel Version:             6.8.0-1045-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.3
	  Kubelet Version:            v1.34.3
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         94s
	  kube-system                 coredns-66bc5c9577-n4nqj                      100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     110s
	  kube-system                 etcd-embed-certs-684693                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         115s
	  kube-system                 kindnet-gqdkf                                 100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      110s
	  kube-system                 kube-apiserver-embed-certs-684693             250m (3%)     0 (0%)      0 (0%)           0 (0%)         115s
	  kube-system                 kube-controller-manager-embed-certs-684693    200m (2%)     0 (0%)      0 (0%)           0 (0%)         115s
	  kube-system                 kube-proxy-wzb26                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         110s
	  kube-system                 kube-scheduler-embed-certs-684693             100m (1%)     0 (0%)      0 (0%)           0 (0%)         115s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         109s
	  kubernetes-dashboard        dashboard-metrics-scraper-6ffb444bf9-bcvcs    0 (0%)        0 (0%)      0 (0%)           0 (0%)         52s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-xv29k         0 (0%)        0 (0%)      0 (0%)           0 (0%)         52s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 108s               kube-proxy       
	  Normal  Starting                 54s                kube-proxy       
	  Normal  NodeHasSufficientMemory  115s               kubelet          Node embed-certs-684693 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    115s               kubelet          Node embed-certs-684693 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     115s               kubelet          Node embed-certs-684693 status is now: NodeHasSufficientPID
	  Normal  Starting                 115s               kubelet          Starting kubelet.
	  Normal  RegisteredNode           111s               node-controller  Node embed-certs-684693 event: Registered Node embed-certs-684693 in Controller
	  Normal  NodeReady                97s                kubelet          Node embed-certs-684693 status is now: NodeReady
	  Normal  Starting                 58s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  58s (x8 over 58s)  kubelet          Node embed-certs-684693 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    58s (x8 over 58s)  kubelet          Node embed-certs-684693 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     58s (x8 over 58s)  kubelet          Node embed-certs-684693 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           53s                node-controller  Node embed-certs-684693 event: Registered Node embed-certs-684693 in Controller
	
	
	==> dmesg <==
	[Dec25 18:17] MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
	[  +0.001703] TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details.
	[  +0.001000] MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
	[  +0.084012] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
	[  +0.391152] i8042: Warning: Keylock active
	[  +0.010665] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.485479] block sda: the capability attribute has been deprecated.
	[  +0.079658] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.024208] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +4.790329] kauditd_printk_skb: 47 callbacks suppressed
	
	
	==> etcd [8d2b7baedf500ee7f1bfe8f8dd198f5e17d7d4765eb8784fa1263ff20a37911d] <==
	{"level":"warn","ts":"2025-12-25T19:02:43.955658Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39628","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-25T19:02:43.962705Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39648","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-25T19:02:43.969299Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39654","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-25T19:02:43.975646Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39674","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-25T19:02:43.983439Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39690","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-25T19:02:43.990303Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39698","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-25T19:02:43.996873Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39714","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-25T19:02:44.005989Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39734","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-25T19:02:44.012460Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39756","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-25T19:02:44.019037Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39788","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-25T19:02:44.025497Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39810","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-25T19:02:44.032138Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39826","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-25T19:02:44.038673Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39838","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-25T19:02:44.045490Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39862","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-25T19:02:44.052876Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39878","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-25T19:02:44.059773Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39906","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-25T19:02:44.067544Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39930","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-25T19:02:44.074877Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39946","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-25T19:02:44.082173Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39974","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-25T19:02:44.096782Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40000","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-25T19:02:44.103485Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40018","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-25T19:02:44.109978Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40034","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-25T19:02:44.163001Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40044","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-25T19:03:36.060982Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"128.401139ms","expected-duration":"100ms","prefix":"","request":"header:<ID:15638357553590744098 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/masterleases/192.168.76.2\" mod_revision:631 > success:<request_put:<key:\"/registry/masterleases/192.168.76.2\" value_size:65 lease:6414985516735968288 >> failure:<request_range:<key:\"/registry/masterleases/192.168.76.2\" > >>","response":"size:16"}
	{"level":"info","ts":"2025-12-25T19:03:36.061095Z","caller":"traceutil/trace.go:172","msg":"trace[532490420] transaction","detail":"{read_only:false; response_revision:638; number_of_response:1; }","duration":"233.128922ms","start":"2025-12-25T19:03:35.827951Z","end":"2025-12-25T19:03:36.061080Z","steps":["trace[532490420] 'process raft request'  (duration: 104.076463ms)","trace[532490420] 'compare'  (duration: 128.294023ms)"],"step_count":2}
	
	
	==> kernel <==
	 19:03:40 up 46 min,  0 user,  load average: 2.84, 2.51, 1.82
	Linux embed-certs-684693 6.8.0-1045-gcp #48~22.04.1-Ubuntu SMP Tue Nov 25 13:07:56 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [ea9cdb66e74e5837779af5d99fbae5b1f3b687573b29124b6deecdc991179c3c] <==
	I1225 19:02:45.680548       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1225 19:02:45.680856       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1225 19:02:45.681047       1 main.go:148] setting mtu 1500 for CNI 
	I1225 19:02:45.681076       1 main.go:178] kindnetd IP family: "ipv4"
	I1225 19:02:45.681121       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-12-25T19:02:45Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1225 19:02:45.891236       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1225 19:02:45.891272       1 controller.go:381] "Waiting for informer caches to sync"
	I1225 19:02:45.891292       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1225 19:02:45.891968       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1225 19:02:46.392340       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1225 19:02:46.392368       1 metrics.go:72] Registering metrics
	I1225 19:02:46.392442       1 controller.go:711] "Syncing nftables rules"
	I1225 19:02:55.892024       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1225 19:02:55.892095       1 main.go:301] handling current node
	I1225 19:03:05.892220       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1225 19:03:05.892265       1 main.go:301] handling current node
	I1225 19:03:15.891196       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1225 19:03:15.891232       1 main.go:301] handling current node
	I1225 19:03:25.896208       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1225 19:03:25.896269       1 main.go:301] handling current node
	I1225 19:03:35.896661       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1225 19:03:35.896695       1 main.go:301] handling current node
	
	
	==> kube-apiserver [96d9542c197212f0c05bc896dbb04b02a41cb77ea63e21dd98bd9fec4091843d] <==
	I1225 19:02:44.645841       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1225 19:02:44.646370       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1225 19:02:44.646584       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1225 19:02:44.646668       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1225 19:02:44.649399       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1225 19:02:44.649462       1 shared_informer.go:356] "Caches are synced" controller="configmaps"
	I1225 19:02:44.653740       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1225 19:02:44.659188       1 shared_informer.go:356] "Caches are synced" controller="crd-autoregister"
	I1225 19:02:44.659230       1 aggregator.go:171] initial CRD sync complete...
	I1225 19:02:44.659239       1 autoregister_controller.go:144] Starting autoregister controller
	I1225 19:02:44.659247       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1225 19:02:44.659252       1 cache.go:39] Caches are synced for autoregister controller
	I1225 19:02:44.660425       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1225 19:02:44.683746       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1225 19:02:44.943126       1 controller.go:667] quota admission added evaluator for: namespaces
	I1225 19:02:44.969869       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1225 19:02:44.985944       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1225 19:02:44.993160       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1225 19:02:44.999310       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1225 19:02:45.036374       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.101.136.43"}
	I1225 19:02:45.046773       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.108.2.177"}
	I1225 19:02:45.548242       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1225 19:02:48.231221       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1225 19:02:48.383082       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1225 19:02:48.430236       1 controller.go:667] quota admission added evaluator for: endpoints
	
	
	==> kube-controller-manager [f163abb6ccc23812b01aab1787a1e9cb17c7aa29ac0031c5d3d528bd0d223238] <==
	I1225 19:02:47.954223       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1225 19:02:47.954233       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1225 19:02:47.955293       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1225 19:02:47.957780       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1225 19:02:47.967964       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1225 19:02:47.968029       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1225 19:02:47.968055       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1225 19:02:47.968060       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1225 19:02:47.968064       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1225 19:02:47.977928       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1225 19:02:47.977946       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1225 19:02:47.977936       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1225 19:02:47.977980       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1225 19:02:47.978024       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1225 19:02:47.978027       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1225 19:02:47.978045       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1225 19:02:47.978400       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1225 19:02:47.978548       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1225 19:02:47.979709       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1225 19:02:47.979816       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1225 19:02:47.979933       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="embed-certs-684693"
	I1225 19:02:47.979992       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1225 19:02:47.983770       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1225 19:02:47.994886       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1225 19:02:48.000042       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	
	
	==> kube-proxy [6be834e877742b8bfa0bc2d501ed6913a2453ae40c561e27beb542006c7d47e6] <==
	I1225 19:02:45.437440       1 server_linux.go:53] "Using iptables proxy"
	I1225 19:02:45.501727       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1225 19:02:45.602222       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1225 19:02:45.602267       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E1225 19:02:45.602335       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1225 19:02:45.623378       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1225 19:02:45.623444       1 server_linux.go:132] "Using iptables Proxier"
	I1225 19:02:45.629435       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1225 19:02:45.629909       1 server.go:527] "Version info" version="v1.34.3"
	I1225 19:02:45.629962       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1225 19:02:45.631630       1 config.go:403] "Starting serviceCIDR config controller"
	I1225 19:02:45.631661       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1225 19:02:45.631704       1 config.go:200] "Starting service config controller"
	I1225 19:02:45.631720       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1225 19:02:45.631728       1 config.go:106] "Starting endpoint slice config controller"
	I1225 19:02:45.631743       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1225 19:02:45.631799       1 config.go:309] "Starting node config controller"
	I1225 19:02:45.631807       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1225 19:02:45.731850       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1225 19:02:45.731874       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1225 19:02:45.731923       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1225 19:02:45.731877       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-scheduler [8d7e8dc3eb792d198de0248572b5e18d4499c1684bda9bf5f17def41a2fab818] <==
	I1225 19:02:43.435841       1 serving.go:386] Generated self-signed cert in-memory
	I1225 19:02:45.295740       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.3"
	I1225 19:02:45.295763       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1225 19:02:45.299686       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I1225 19:02:45.299709       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1225 19:02:45.299716       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1225 19:02:45.299726       1 shared_informer.go:349] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
	I1225 19:02:45.299729       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1225 19:02:45.299736       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1225 19:02:45.300148       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1225 19:02:45.300215       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1225 19:02:45.399877       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1225 19:02:45.399941       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1225 19:02:45.400195       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
	
	
	==> kubelet <==
	Dec 25 19:02:48 embed-certs-684693 kubelet[730]: I1225 19:02:48.597642     730 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/22cdf105-cc29-4664-bb39-988c3cbbed55-tmp-volume\") pod \"kubernetes-dashboard-855c9754f9-xv29k\" (UID: \"22cdf105-cc29-4664-bb39-988c3cbbed55\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-xv29k"
	Dec 25 19:02:48 embed-certs-684693 kubelet[730]: I1225 19:02:48.597655     730 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dbgp9\" (UniqueName: \"kubernetes.io/projected/22cdf105-cc29-4664-bb39-988c3cbbed55-kube-api-access-dbgp9\") pod \"kubernetes-dashboard-855c9754f9-xv29k\" (UID: \"22cdf105-cc29-4664-bb39-988c3cbbed55\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-xv29k"
	Dec 25 19:02:51 embed-certs-684693 kubelet[730]: I1225 19:02:51.103238     730 scope.go:117] "RemoveContainer" containerID="2eab8063837722e0a2e2b694e6c1c8f12f9e668b63dac3bb644127accb3fffce"
	Dec 25 19:02:51 embed-certs-684693 kubelet[730]: I1225 19:02:51.152097     730 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Dec 25 19:02:52 embed-certs-684693 kubelet[730]: I1225 19:02:52.107428     730 scope.go:117] "RemoveContainer" containerID="2eab8063837722e0a2e2b694e6c1c8f12f9e668b63dac3bb644127accb3fffce"
	Dec 25 19:02:52 embed-certs-684693 kubelet[730]: I1225 19:02:52.107603     730 scope.go:117] "RemoveContainer" containerID="c3217d9c195c881876d38490fc5fa9a60e72aacb9861ebb97fb47adc20058b6c"
	Dec 25 19:02:52 embed-certs-684693 kubelet[730]: E1225 19:02:52.107797     730 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-bcvcs_kubernetes-dashboard(3d92e4d7-4fa3-464f-b547-80838b400c09)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-bcvcs" podUID="3d92e4d7-4fa3-464f-b547-80838b400c09"
	Dec 25 19:02:53 embed-certs-684693 kubelet[730]: I1225 19:02:53.113652     730 scope.go:117] "RemoveContainer" containerID="c3217d9c195c881876d38490fc5fa9a60e72aacb9861ebb97fb47adc20058b6c"
	Dec 25 19:02:53 embed-certs-684693 kubelet[730]: E1225 19:02:53.113848     730 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-bcvcs_kubernetes-dashboard(3d92e4d7-4fa3-464f-b547-80838b400c09)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-bcvcs" podUID="3d92e4d7-4fa3-464f-b547-80838b400c09"
	Dec 25 19:02:54 embed-certs-684693 kubelet[730]: I1225 19:02:54.128467     730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-xv29k" podStartSLOduration=1.101966603 podStartE2EDuration="6.128443806s" podCreationTimestamp="2025-12-25 19:02:48 +0000 UTC" firstStartedPulling="2025-12-25 19:02:48.828144052 +0000 UTC m=+6.873089429" lastFinishedPulling="2025-12-25 19:02:53.854621247 +0000 UTC m=+11.899566632" observedRunningTime="2025-12-25 19:02:54.128287055 +0000 UTC m=+12.173232449" watchObservedRunningTime="2025-12-25 19:02:54.128443806 +0000 UTC m=+12.173389200"
	Dec 25 19:02:56 embed-certs-684693 kubelet[730]: I1225 19:02:56.908990     730 scope.go:117] "RemoveContainer" containerID="c3217d9c195c881876d38490fc5fa9a60e72aacb9861ebb97fb47adc20058b6c"
	Dec 25 19:02:56 embed-certs-684693 kubelet[730]: E1225 19:02:56.909167     730 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-bcvcs_kubernetes-dashboard(3d92e4d7-4fa3-464f-b547-80838b400c09)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-bcvcs" podUID="3d92e4d7-4fa3-464f-b547-80838b400c09"
	Dec 25 19:03:11 embed-certs-684693 kubelet[730]: I1225 19:03:11.058458     730 scope.go:117] "RemoveContainer" containerID="c3217d9c195c881876d38490fc5fa9a60e72aacb9861ebb97fb47adc20058b6c"
	Dec 25 19:03:11 embed-certs-684693 kubelet[730]: I1225 19:03:11.162120     730 scope.go:117] "RemoveContainer" containerID="c3217d9c195c881876d38490fc5fa9a60e72aacb9861ebb97fb47adc20058b6c"
	Dec 25 19:03:11 embed-certs-684693 kubelet[730]: I1225 19:03:11.162335     730 scope.go:117] "RemoveContainer" containerID="8b85a58f6727b85925d66ae7c892925d7f0d6ad84cf0a49ac39c7dac9256cb8d"
	Dec 25 19:03:11 embed-certs-684693 kubelet[730]: E1225 19:03:11.162539     730 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-bcvcs_kubernetes-dashboard(3d92e4d7-4fa3-464f-b547-80838b400c09)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-bcvcs" podUID="3d92e4d7-4fa3-464f-b547-80838b400c09"
	Dec 25 19:03:16 embed-certs-684693 kubelet[730]: I1225 19:03:16.176363     730 scope.go:117] "RemoveContainer" containerID="294fb941f29133cb40754cbd33757b426445328bda2c2356fe6d08b22884da2b"
	Dec 25 19:03:16 embed-certs-684693 kubelet[730]: I1225 19:03:16.910040     730 scope.go:117] "RemoveContainer" containerID="8b85a58f6727b85925d66ae7c892925d7f0d6ad84cf0a49ac39c7dac9256cb8d"
	Dec 25 19:03:16 embed-certs-684693 kubelet[730]: E1225 19:03:16.910246     730 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-bcvcs_kubernetes-dashboard(3d92e4d7-4fa3-464f-b547-80838b400c09)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-bcvcs" podUID="3d92e4d7-4fa3-464f-b547-80838b400c09"
	Dec 25 19:03:29 embed-certs-684693 kubelet[730]: I1225 19:03:29.058367     730 scope.go:117] "RemoveContainer" containerID="8b85a58f6727b85925d66ae7c892925d7f0d6ad84cf0a49ac39c7dac9256cb8d"
	Dec 25 19:03:29 embed-certs-684693 kubelet[730]: E1225 19:03:29.058553     730 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-bcvcs_kubernetes-dashboard(3d92e4d7-4fa3-464f-b547-80838b400c09)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-bcvcs" podUID="3d92e4d7-4fa3-464f-b547-80838b400c09"
	Dec 25 19:03:35 embed-certs-684693 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Dec 25 19:03:35 embed-certs-684693 systemd[1]: kubelet.service: Deactivated successfully.
	Dec 25 19:03:35 embed-certs-684693 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 25 19:03:35 embed-certs-684693 systemd[1]: kubelet.service: Consumed 1.672s CPU time.
	
	
	==> kubernetes-dashboard [cd0104e7b2433665e7a7678289b4f5de2377208d5e5b7d7a93d384d481448c5f] <==
	2025/12/25 19:02:53 Using namespace: kubernetes-dashboard
	2025/12/25 19:02:53 Using in-cluster config to connect to apiserver
	2025/12/25 19:02:53 Using secret token for csrf signing
	2025/12/25 19:02:53 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/12/25 19:02:53 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/12/25 19:02:53 Successful initial request to the apiserver, version: v1.34.3
	2025/12/25 19:02:53 Generating JWE encryption key
	2025/12/25 19:02:53 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/12/25 19:02:53 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/12/25 19:02:54 Initializing JWE encryption key from synchronized object
	2025/12/25 19:02:54 Creating in-cluster Sidecar client
	2025/12/25 19:02:54 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/12/25 19:02:54 Serving insecurely on HTTP port: 9090
	2025/12/25 19:03:24 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/12/25 19:02:53 Starting overwatch
	
	
	==> storage-provisioner [0586e432ee43e609607ade028951a4592cec0588e09512fa74f54317148acb65] <==
	I1225 19:03:16.229704       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1225 19:03:16.239123       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1225 19:03:16.239175       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1225 19:03:16.241218       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1225 19:03:19.695852       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1225 19:03:23.956105       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1225 19:03:27.554492       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1225 19:03:30.608466       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1225 19:03:33.630507       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1225 19:03:33.636245       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1225 19:03:33.636430       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1225 19:03:33.636520       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"c9f66672-eb7f-41d5-8fa8-7c79d48325e3", APIVersion:"v1", ResourceVersion:"635", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-684693_e3448911-f325-4f67-a4f0-27f111e2b194 became leader
	I1225 19:03:33.636697       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-684693_e3448911-f325-4f67-a4f0-27f111e2b194!
	W1225 19:03:33.638384       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1225 19:03:33.642404       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1225 19:03:33.737677       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-684693_e3448911-f325-4f67-a4f0-27f111e2b194!
	W1225 19:03:35.645438       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1225 19:03:35.677703       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1225 19:03:37.681622       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1225 19:03:37.688001       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1225 19:03:39.691007       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1225 19:03:39.695975       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	
	
	==> storage-provisioner [294fb941f29133cb40754cbd33757b426445328bda2c2356fe6d08b22884da2b] <==
	I1225 19:02:45.410582       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1225 19:03:15.414289       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

                                                
                                                
-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-684693 -n embed-certs-684693
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-684693 -n embed-certs-684693: exit status 2 (347.93755ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:270: (dbg) Run:  kubectl --context embed-certs-684693 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:294: <<< TestStartStop/group/embed-certs/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:295: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/embed-certs/serial/Pause (6.13s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (2.27s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-960022 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-960022 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 11 (256.444151ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-25T19:03:50Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-960022 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 11
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context default-k8s-diff-port-960022 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:213: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-960022 describe deploy/metrics-server -n kube-system: exit status 1 (74.64843ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): deployments.apps "metrics-server" not found

                                                
                                                
** /stderr **
start_stop_delete_test.go:215: failed to get info on auto-pause deployments. args "kubectl --context default-k8s-diff-port-960022 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:219: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect default-k8s-diff-port-960022
helpers_test.go:244: (dbg) docker inspect default-k8s-diff-port-960022:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "e715f5c007f682ea129fd33b0f719ca5682bfd93ff193a553aa1f39c184e3d0f",
	        "Created": "2025-12-25T19:03:07.962087481Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 291185,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-25T19:03:07.991952064Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:c444b5e11df9d6bc496256b0e11f4e11deb33a885211ded2c3a4667df55bcb6b",
	        "ResolvConfPath": "/var/lib/docker/containers/e715f5c007f682ea129fd33b0f719ca5682bfd93ff193a553aa1f39c184e3d0f/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/e715f5c007f682ea129fd33b0f719ca5682bfd93ff193a553aa1f39c184e3d0f/hostname",
	        "HostsPath": "/var/lib/docker/containers/e715f5c007f682ea129fd33b0f719ca5682bfd93ff193a553aa1f39c184e3d0f/hosts",
	        "LogPath": "/var/lib/docker/containers/e715f5c007f682ea129fd33b0f719ca5682bfd93ff193a553aa1f39c184e3d0f/e715f5c007f682ea129fd33b0f719ca5682bfd93ff193a553aa1f39c184e3d0f-json.log",
	        "Name": "/default-k8s-diff-port-960022",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "default-k8s-diff-port-960022:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "default-k8s-diff-port-960022",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "e715f5c007f682ea129fd33b0f719ca5682bfd93ff193a553aa1f39c184e3d0f",
	                "LowerDir": "/var/lib/docker/overlay2/183acc595d1c6327748578242623306ecba85c5f3e4e2d46fbcc0037e6eeba8c-init/diff:/var/lib/docker/overlay2/8152586e7e91edad0090b5c322534edd1346ae6dc28cbca1827aa4c23f366758/diff",
	                "MergedDir": "/var/lib/docker/overlay2/183acc595d1c6327748578242623306ecba85c5f3e4e2d46fbcc0037e6eeba8c/merged",
	                "UpperDir": "/var/lib/docker/overlay2/183acc595d1c6327748578242623306ecba85c5f3e4e2d46fbcc0037e6eeba8c/diff",
	                "WorkDir": "/var/lib/docker/overlay2/183acc595d1c6327748578242623306ecba85c5f3e4e2d46fbcc0037e6eeba8c/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "default-k8s-diff-port-960022",
	                "Source": "/var/lib/docker/volumes/default-k8s-diff-port-960022/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "default-k8s-diff-port-960022",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8444/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "default-k8s-diff-port-960022",
	                "name.minikube.sigs.k8s.io": "default-k8s-diff-port-960022",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "68d477b24f39bc5e94fd7a1fb93e18fbf797749bbdf0367f473b8c71471e3ed0",
	            "SandboxKey": "/var/run/docker/netns/68d477b24f39",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33088"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33089"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33092"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33090"
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33091"
	                    }
	                ]
	            },
	            "Networks": {
	                "default-k8s-diff-port-960022": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.103.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "6496648f4bb9e6db2a787d51dc81aaa3ff1aaea70439b67d588aff1a80515c8b",
	                    "EndpointID": "2285b62054c196af993dfe5115cc0c8a1932b1747ea3f09c0b808e4817efeec7",
	                    "Gateway": "192.168.103.1",
	                    "IPAddress": "192.168.103.2",
	                    "MacAddress": "1e:29:ad:59:14:4a",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "default-k8s-diff-port-960022",
	                        "e715f5c007f6"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-960022 -n default-k8s-diff-port-960022
helpers_test.go:253: <<< TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-960022 logs -n 25
E1225 19:03:52.420355    9112 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22301-5579/.minikube/profiles/functional-984202/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:256: (dbg) Done: out/minikube-linux-amd64 -p default-k8s-diff-port-960022 logs -n 25: (1.077548213s)
helpers_test.go:261: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                        ARGS                                                                                                                        │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ addons  │ enable metrics-server -p no-preload-148352 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                            │ no-preload-148352            │ jenkins │ v1.37.0 │ 25 Dec 25 19:02 UTC │                     │
	│ stop    │ -p no-preload-148352 --alsologtostderr -v=3                                                                                                                                                                                                        │ no-preload-148352            │ jenkins │ v1.37.0 │ 25 Dec 25 19:02 UTC │ 25 Dec 25 19:02 UTC │
	│ addons  │ enable metrics-server -p embed-certs-684693 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                           │ embed-certs-684693           │ jenkins │ v1.37.0 │ 25 Dec 25 19:02 UTC │                     │
	│ stop    │ -p embed-certs-684693 --alsologtostderr -v=3                                                                                                                                                                                                       │ embed-certs-684693           │ jenkins │ v1.37.0 │ 25 Dec 25 19:02 UTC │ 25 Dec 25 19:02 UTC │
	│ addons  │ enable dashboard -p no-preload-148352 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                       │ no-preload-148352            │ jenkins │ v1.37.0 │ 25 Dec 25 19:02 UTC │ 25 Dec 25 19:02 UTC │
	│ start   │ -p no-preload-148352 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-rc.1                                                                                       │ no-preload-148352            │ jenkins │ v1.37.0 │ 25 Dec 25 19:02 UTC │ 25 Dec 25 19:03 UTC │
	│ addons  │ enable dashboard -p embed-certs-684693 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                      │ embed-certs-684693           │ jenkins │ v1.37.0 │ 25 Dec 25 19:02 UTC │ 25 Dec 25 19:02 UTC │
	│ start   │ -p embed-certs-684693 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.3                                                                                             │ embed-certs-684693           │ jenkins │ v1.37.0 │ 25 Dec 25 19:02 UTC │ 25 Dec 25 19:03 UTC │
	│ image   │ old-k8s-version-163446 image list --format=json                                                                                                                                                                                                    │ old-k8s-version-163446       │ jenkins │ v1.37.0 │ 25 Dec 25 19:02 UTC │ 25 Dec 25 19:02 UTC │
	│ pause   │ -p old-k8s-version-163446 --alsologtostderr -v=1                                                                                                                                                                                                   │ old-k8s-version-163446       │ jenkins │ v1.37.0 │ 25 Dec 25 19:02 UTC │                     │
	│ delete  │ -p old-k8s-version-163446                                                                                                                                                                                                                          │ old-k8s-version-163446       │ jenkins │ v1.37.0 │ 25 Dec 25 19:02 UTC │ 25 Dec 25 19:03 UTC │
	│ delete  │ -p old-k8s-version-163446                                                                                                                                                                                                                          │ old-k8s-version-163446       │ jenkins │ v1.37.0 │ 25 Dec 25 19:03 UTC │ 25 Dec 25 19:03 UTC │
	│ delete  │ -p disable-driver-mounts-102827                                                                                                                                                                                                                    │ disable-driver-mounts-102827 │ jenkins │ v1.37.0 │ 25 Dec 25 19:03 UTC │ 25 Dec 25 19:03 UTC │
	│ start   │ -p default-k8s-diff-port-960022 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.3                                                                           │ default-k8s-diff-port-960022 │ jenkins │ v1.37.0 │ 25 Dec 25 19:03 UTC │ 25 Dec 25 19:03 UTC │
	│ image   │ no-preload-148352 image list --format=json                                                                                                                                                                                                         │ no-preload-148352            │ jenkins │ v1.37.0 │ 25 Dec 25 19:03 UTC │ 25 Dec 25 19:03 UTC │
	│ pause   │ -p no-preload-148352 --alsologtostderr -v=1                                                                                                                                                                                                        │ no-preload-148352            │ jenkins │ v1.37.0 │ 25 Dec 25 19:03 UTC │                     │
	│ delete  │ -p no-preload-148352                                                                                                                                                                                                                               │ no-preload-148352            │ jenkins │ v1.37.0 │ 25 Dec 25 19:03 UTC │ 25 Dec 25 19:03 UTC │
	│ delete  │ -p no-preload-148352                                                                                                                                                                                                                               │ no-preload-148352            │ jenkins │ v1.37.0 │ 25 Dec 25 19:03 UTC │ 25 Dec 25 19:03 UTC │
	│ start   │ -p newest-cni-731832 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-rc.1 │ newest-cni-731832            │ jenkins │ v1.37.0 │ 25 Dec 25 19:03 UTC │                     │
	│ image   │ embed-certs-684693 image list --format=json                                                                                                                                                                                                        │ embed-certs-684693           │ jenkins │ v1.37.0 │ 25 Dec 25 19:03 UTC │ 25 Dec 25 19:03 UTC │
	│ pause   │ -p embed-certs-684693 --alsologtostderr -v=1                                                                                                                                                                                                       │ embed-certs-684693           │ jenkins │ v1.37.0 │ 25 Dec 25 19:03 UTC │                     │
	│ delete  │ -p embed-certs-684693                                                                                                                                                                                                                              │ embed-certs-684693           │ jenkins │ v1.37.0 │ 25 Dec 25 19:03 UTC │ 25 Dec 25 19:03 UTC │
	│ delete  │ -p embed-certs-684693                                                                                                                                                                                                                              │ embed-certs-684693           │ jenkins │ v1.37.0 │ 25 Dec 25 19:03 UTC │ 25 Dec 25 19:03 UTC │
	│ start   │ -p auto-910464 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio                                                                                                                            │ auto-910464                  │ jenkins │ v1.37.0 │ 25 Dec 25 19:03 UTC │                     │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-960022 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                 │ default-k8s-diff-port-960022 │ jenkins │ v1.37.0 │ 25 Dec 25 19:03 UTC │                     │
	└─────────┴────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/25 19:03:44
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1225 19:03:44.086595  301873 out.go:360] Setting OutFile to fd 1 ...
	I1225 19:03:44.086859  301873 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1225 19:03:44.086871  301873 out.go:374] Setting ErrFile to fd 2...
	I1225 19:03:44.086878  301873 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1225 19:03:44.087111  301873 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22301-5579/.minikube/bin
	I1225 19:03:44.087594  301873 out.go:368] Setting JSON to false
	I1225 19:03:44.088720  301873 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":2772,"bootTime":1766686652,"procs":300,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1225 19:03:44.088785  301873 start.go:143] virtualization: kvm guest
	I1225 19:03:44.090796  301873 out.go:179] * [auto-910464] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1225 19:03:44.092111  301873 out.go:179]   - MINIKUBE_LOCATION=22301
	I1225 19:03:44.092107  301873 notify.go:221] Checking for updates...
	I1225 19:03:44.094467  301873 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1225 19:03:44.095755  301873 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22301-5579/kubeconfig
	I1225 19:03:44.099210  301873 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22301-5579/.minikube
	I1225 19:03:44.100371  301873 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1225 19:03:44.101874  301873 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1225 19:03:44.103483  301873 config.go:182] Loaded profile config "default-k8s-diff-port-960022": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1225 19:03:44.103570  301873 config.go:182] Loaded profile config "kubernetes-upgrade-498224": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-rc.1
	I1225 19:03:44.103656  301873 config.go:182] Loaded profile config "newest-cni-731832": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-rc.1
	I1225 19:03:44.103738  301873 driver.go:422] Setting default libvirt URI to qemu:///system
	I1225 19:03:44.127458  301873 docker.go:124] docker version: linux-29.1.3:Docker Engine - Community
	I1225 19:03:44.127544  301873 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1225 19:03:44.198134  301873 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:65 OomKillDisable:false NGoroutines:75 SystemTime:2025-12-25 19:03:44.184550133 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:dea7da592f5d1d2b7755e3a161be07f43fad8f75 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1225 19:03:44.198244  301873 docker.go:319] overlay module found
	I1225 19:03:44.200674  301873 out.go:179] * Using the docker driver based on user configuration
	I1225 19:03:44.201852  301873 start.go:309] selected driver: docker
	I1225 19:03:44.201867  301873 start.go:928] validating driver "docker" against <nil>
	I1225 19:03:44.201878  301873 start.go:939] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1225 19:03:44.202524  301873 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1225 19:03:44.283363  301873 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:64 OomKillDisable:false NGoroutines:75 SystemTime:2025-12-25 19:03:44.269622488 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:dea7da592f5d1d2b7755e3a161be07f43fad8f75 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1225 19:03:44.283763  301873 start_flags.go:333] no existing cluster config was found, will generate one from the flags 
	I1225 19:03:44.284058  301873 start_flags.go:1019] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1225 19:03:44.285693  301873 out.go:179] * Using Docker driver with root privileges
	I1225 19:03:44.286931  301873 cni.go:84] Creating CNI manager for ""
	I1225 19:03:44.287016  301873 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1225 19:03:44.287032  301873 start_flags.go:342] Found "CNI" CNI - setting NetworkPlugin=cni
	I1225 19:03:44.287138  301873 start.go:353] cluster config:
	{Name:auto-910464 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:auto-910464 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
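The cluster config dumped above is what gets persisted a moment later to the profile's config.json (see the "Saving config to .../profiles/auto-910464/config.json" line below). A minimal sketch, not part of minikube, of loading that profile file and printing the fields this test matrix cares about; the JSON field names are assumed to mirror the struct dump above.

// inspect_profile.go - minimal sketch: load a minikube profile config.json like
// the one logged above and print the driver/runtime/version fields. Field names
// are assumptions based on the struct dump, not a documented schema.
package main

import (
	"encoding/json"
	"fmt"
	"os"
)

type clusterConfig struct {
	Name             string
	Driver           string
	KubernetesConfig struct {
		KubernetesVersion string
		ClusterName       string
		ContainerRuntime  string
		NetworkPlugin     string
	}
}

func main() {
	// Path taken from the log; adjust for your own MINIKUBE_HOME.
	data, err := os.ReadFile("/home/jenkins/minikube-integration/22301-5579/.minikube/profiles/auto-910464/config.json")
	if err != nil {
		panic(err)
	}
	var cc clusterConfig
	if err := json.Unmarshal(data, &cc); err != nil {
		panic(err)
	}
	fmt.Printf("profile=%s driver=%s runtime=%s k8s=%s plugin=%s\n",
		cc.Name, cc.Driver, cc.KubernetesConfig.ContainerRuntime,
		cc.KubernetesConfig.KubernetesVersion, cc.KubernetesConfig.NetworkPlugin)
}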
	I1225 19:03:44.288352  301873 out.go:179] * Starting "auto-910464" primary control-plane node in "auto-910464" cluster
	I1225 19:03:44.290546  301873 cache.go:134] Beginning downloading kic base image for docker with crio
	I1225 19:03:44.291703  301873 out.go:179] * Pulling base image v0.0.48-1766570851-22316 ...
	I1225 19:03:44.293227  301873 preload.go:188] Checking if preload exists for k8s version v1.34.3 and runtime crio
	I1225 19:03:44.293261  301873 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22301-5579/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.3-cri-o-overlay-amd64.tar.lz4
	I1225 19:03:44.293269  301873 cache.go:65] Caching tarball of preloaded images
	I1225 19:03:44.293317  301873 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a in local docker daemon
	I1225 19:03:44.293345  301873 preload.go:251] Found /home/jenkins/minikube-integration/22301-5579/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1225 19:03:44.293352  301873 cache.go:68] Finished verifying existence of preloaded tar for v1.34.3 on crio
	I1225 19:03:44.293438  301873 profile.go:143] Saving config to /home/jenkins/minikube-integration/22301-5579/.minikube/profiles/auto-910464/config.json ...
	I1225 19:03:44.293452  301873 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22301-5579/.minikube/profiles/auto-910464/config.json: {Name:mka1a95cebb2cfb817db063373335d3e0e1b02cb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1225 19:03:44.325852  301873 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a in local docker daemon, skipping pull
	I1225 19:03:44.325877  301873 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a exists in daemon, skipping load
	I1225 19:03:44.325917  301873 cache.go:243] Successfully downloaded all kic artifacts
	I1225 19:03:44.325956  301873 start.go:360] acquireMachinesLock for auto-910464: {Name:mka875ee821acfdcf577182c3e0b1307ceee44bf Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1225 19:03:44.326055  301873 start.go:364] duration metric: took 81.481µs to acquireMachinesLock for "auto-910464"
	I1225 19:03:44.326083  301873 start.go:93] Provisioning new machine with config: &{Name:auto-910464 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:auto-910464 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false} &{Name: IP: Port:8443 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1225 19:03:44.326162  301873 start.go:125] createHost starting for "" (driver="docker")
	I1225 19:03:44.209984  260034 api_server.go:299] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1225 19:03:44.210413  260034 api_server.go:315] stopped: https://192.168.94.2:8443/healthz: Get "https://192.168.94.2:8443/healthz": dial tcp 192.168.94.2:8443: connect: connection refused
	I1225 19:03:44.210473  260034 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1225 19:03:44.210529  260034 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1225 19:03:44.253693  260034 cri.go:96] found id: "1d4a54214a1675f6c77eff77964da947e4f00cff45e007ad6adb0c898c3e76fa"
	I1225 19:03:44.253716  260034 cri.go:96] found id: "e3d5802d1e3d9285d4b0925e9f5391b90305352a65abbab3d6461e977e07719c"
	I1225 19:03:44.253722  260034 cri.go:96] found id: ""
	I1225 19:03:44.253731  260034 logs.go:282] 2 containers: [1d4a54214a1675f6c77eff77964da947e4f00cff45e007ad6adb0c898c3e76fa e3d5802d1e3d9285d4b0925e9f5391b90305352a65abbab3d6461e977e07719c]
	I1225 19:03:44.253793  260034 ssh_runner.go:195] Run: which crictl
	I1225 19:03:44.258755  260034 ssh_runner.go:195] Run: which crictl
	I1225 19:03:44.264497  260034 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1225 19:03:44.264578  260034 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1225 19:03:44.304426  260034 cri.go:96] found id: "b9e20d208a63ece66f4b7d503d5220ba92810f69d6963dee30a8cf70eeec5508"
	I1225 19:03:44.304454  260034 cri.go:96] found id: ""
	I1225 19:03:44.304464  260034 logs.go:282] 1 containers: [b9e20d208a63ece66f4b7d503d5220ba92810f69d6963dee30a8cf70eeec5508]
	I1225 19:03:44.304523  260034 ssh_runner.go:195] Run: which crictl
	I1225 19:03:44.310097  260034 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1225 19:03:44.310165  260034 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1225 19:03:44.343141  260034 cri.go:96] found id: ""
	I1225 19:03:44.343167  260034 logs.go:282] 0 containers: []
	W1225 19:03:44.343178  260034 logs.go:284] No container was found matching "coredns"
	I1225 19:03:44.343194  260034 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1225 19:03:44.343252  260034 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1225 19:03:44.381196  260034 cri.go:96] found id: "5908cdc560e3221d929c96c779b2e73ece21f96acb63b3adbf448cc7de791f2b"
	I1225 19:03:44.381266  260034 cri.go:96] found id: "47a1daa4bde269c4add70fd957319c2bd011e78566a1187d9a76b78fb4005782"
	I1225 19:03:44.381276  260034 cri.go:96] found id: ""
	I1225 19:03:44.381285  260034 logs.go:282] 2 containers: [5908cdc560e3221d929c96c779b2e73ece21f96acb63b3adbf448cc7de791f2b 47a1daa4bde269c4add70fd957319c2bd011e78566a1187d9a76b78fb4005782]
	I1225 19:03:44.381339  260034 ssh_runner.go:195] Run: which crictl
	I1225 19:03:44.386103  260034 ssh_runner.go:195] Run: which crictl
	I1225 19:03:44.390625  260034 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1225 19:03:44.390733  260034 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1225 19:03:44.426754  260034 cri.go:96] found id: ""
	I1225 19:03:44.426781  260034 logs.go:282] 0 containers: []
	W1225 19:03:44.426791  260034 logs.go:284] No container was found matching "kube-proxy"
	I1225 19:03:44.426801  260034 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1225 19:03:44.426853  260034 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1225 19:03:44.458800  260034 cri.go:96] found id: "0887dc59ddb6090de10772f8a1f50ab0d2afe1586d7c8d118ffeef1810deb88d"
	I1225 19:03:44.458821  260034 cri.go:96] found id: "33343dda71f9e4a85025812aa89a625b9ed4aacd8cf40e537a040b7a352c6338"
	I1225 19:03:44.458827  260034 cri.go:96] found id: ""
	I1225 19:03:44.458835  260034 logs.go:282] 2 containers: [0887dc59ddb6090de10772f8a1f50ab0d2afe1586d7c8d118ffeef1810deb88d 33343dda71f9e4a85025812aa89a625b9ed4aacd8cf40e537a040b7a352c6338]
	I1225 19:03:44.458902  260034 ssh_runner.go:195] Run: which crictl
	I1225 19:03:44.463190  260034 ssh_runner.go:195] Run: which crictl
	I1225 19:03:44.466836  260034 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1225 19:03:44.466917  260034 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1225 19:03:44.494212  260034 cri.go:96] found id: ""
	I1225 19:03:44.494238  260034 logs.go:282] 0 containers: []
	W1225 19:03:44.494249  260034 logs.go:284] No container was found matching "kindnet"
	I1225 19:03:44.494256  260034 cri.go:61] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1225 19:03:44.494319  260034 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=storage-provisioner
	I1225 19:03:44.529238  260034 cri.go:96] found id: ""
	I1225 19:03:44.529264  260034 logs.go:282] 0 containers: []
	W1225 19:03:44.529275  260034 logs.go:284] No container was found matching "storage-provisioner"
	I1225 19:03:44.529286  260034 logs.go:123] Gathering logs for kubelet ...
	I1225 19:03:44.529300  260034 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1225 19:03:44.635557  260034 logs.go:123] Gathering logs for kube-apiserver [e3d5802d1e3d9285d4b0925e9f5391b90305352a65abbab3d6461e977e07719c] ...
	I1225 19:03:44.635588  260034 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e3d5802d1e3d9285d4b0925e9f5391b90305352a65abbab3d6461e977e07719c"
	I1225 19:03:44.679662  260034 logs.go:123] Gathering logs for kube-scheduler [5908cdc560e3221d929c96c779b2e73ece21f96acb63b3adbf448cc7de791f2b] ...
	I1225 19:03:44.679701  260034 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5908cdc560e3221d929c96c779b2e73ece21f96acb63b3adbf448cc7de791f2b"
	I1225 19:03:44.715587  260034 logs.go:123] Gathering logs for kube-scheduler [47a1daa4bde269c4add70fd957319c2bd011e78566a1187d9a76b78fb4005782] ...
	I1225 19:03:44.715621  260034 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 47a1daa4bde269c4add70fd957319c2bd011e78566a1187d9a76b78fb4005782"
	I1225 19:03:44.747850  260034 logs.go:123] Gathering logs for kube-controller-manager [0887dc59ddb6090de10772f8a1f50ab0d2afe1586d7c8d118ffeef1810deb88d] ...
	I1225 19:03:44.747882  260034 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 0887dc59ddb6090de10772f8a1f50ab0d2afe1586d7c8d118ffeef1810deb88d"
	I1225 19:03:44.783735  260034 logs.go:123] Gathering logs for CRI-O ...
	I1225 19:03:44.783770  260034 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1225 19:03:44.860831  260034 logs.go:123] Gathering logs for dmesg ...
	I1225 19:03:44.860865  260034 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1225 19:03:44.877191  260034 logs.go:123] Gathering logs for describe nodes ...
	I1225 19:03:44.877219  260034 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1225 19:03:44.958743  260034 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1225 19:03:44.958765  260034 logs.go:123] Gathering logs for kube-apiserver [1d4a54214a1675f6c77eff77964da947e4f00cff45e007ad6adb0c898c3e76fa] ...
	I1225 19:03:44.958783  260034 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 1d4a54214a1675f6c77eff77964da947e4f00cff45e007ad6adb0c898c3e76fa"
	I1225 19:03:44.997866  260034 logs.go:123] Gathering logs for etcd [b9e20d208a63ece66f4b7d503d5220ba92810f69d6963dee30a8cf70eeec5508] ...
	I1225 19:03:44.997934  260034 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b9e20d208a63ece66f4b7d503d5220ba92810f69d6963dee30a8cf70eeec5508"
	I1225 19:03:45.039406  260034 logs.go:123] Gathering logs for kube-controller-manager [33343dda71f9e4a85025812aa89a625b9ed4aacd8cf40e537a040b7a352c6338] ...
	I1225 19:03:45.039446  260034 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 33343dda71f9e4a85025812aa89a625b9ed4aacd8cf40e537a040b7a352c6338"
	I1225 19:03:45.068748  260034 logs.go:123] Gathering logs for container status ...
	I1225 19:03:45.068788  260034 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
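When the apiserver healthz probe is refused (19:03:44.210413 above), this run falls back to gathering component logs: it lists CRI containers by name with crictl, then tails each container it finds with `crictl logs --tail 400`. A minimal sketch of that gather loop, shelling out the same way the log shows; it assumes crictl is on PATH and may be run via sudo, as in the commands above.

// gather_logs.go - minimal sketch of the log-gathering pattern above: list CRI
// containers by component name, then tail each one with crictl.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	components := []string{"kube-apiserver", "etcd", "kube-scheduler", "kube-controller-manager"}
	for _, name := range components {
		// Mirrors: sudo crictl --timeout=10s ps -a --quiet --name=<component>
		out, err := exec.Command("sudo", "crictl", "--timeout=10s", "ps", "-a", "--quiet", "--name="+name).Output()
		if err != nil {
			fmt.Printf("listing %s failed: %v\n", name, err)
			continue
		}
		ids := strings.Fields(string(out))
		if len(ids) == 0 {
			fmt.Printf("no container was found matching %q\n", name)
			continue
		}
		for _, id := range ids {
			// Mirrors: sudo crictl logs --tail 400 <id>
			logs, _ := exec.Command("sudo", "crictl", "logs", "--tail", "400", id).CombinedOutput()
			fmt.Printf("==> %s [%s] <==\n%s\n", name, id, logs)
		}
	}
}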
	I1225 19:03:43.307780  296906 out.go:252]   - Booting up control plane ...
	I1225 19:03:43.307943  296906 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1225 19:03:43.308077  296906 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1225 19:03:43.308818  296906 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1225 19:03:43.325165  296906 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1225 19:03:43.325305  296906 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1225 19:03:43.332668  296906 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1225 19:03:43.333099  296906 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1225 19:03:43.333153  296906 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1225 19:03:43.462050  296906 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1225 19:03:43.462201  296906 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1225 19:03:43.963618  296906 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 501.758728ms
	I1225 19:03:43.969109  296906 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1225 19:03:43.969222  296906 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.85.2:8443/livez
	I1225 19:03:43.969336  296906 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1225 19:03:43.969433  296906 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1225 19:03:44.474017  296906 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 504.988722ms
	I1225 19:03:45.626482  296906 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 1.657634904s
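The control-plane-check lines above poll three endpoints until they return healthy: kube-apiserver's /livez on the advertise address, kube-controller-manager's /healthz on 127.0.0.1:10257, and kube-scheduler's /livez on 127.0.0.1:10259, each with a 4m0s budget. A minimal sketch of an equivalent poll; endpoints are taken from the log, and certificate verification is skipped only because these components serve self-signed certificates in this setup.

// controlplane_check.go - minimal sketch of the health polling kubeadm logs above.
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout:   2 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	endpoints := map[string]string{
		"kube-apiserver":          "https://192.168.85.2:8443/livez",
		"kube-controller-manager": "https://127.0.0.1:10257/healthz",
		"kube-scheduler":          "https://127.0.0.1:10259/livez",
	}
	deadline := time.Now().Add(4 * time.Minute) // same budget the log reports
	for name, url := range endpoints {
		for {
			resp, err := client.Get(url)
			if err == nil && resp.StatusCode == http.StatusOK {
				resp.Body.Close()
				fmt.Printf("%s is healthy\n", name)
				break
			}
			if err == nil {
				resp.Body.Close()
			}
			if time.Now().After(deadline) {
				fmt.Printf("%s did not become healthy in time\n", name)
				break
			}
			time.Sleep(500 * time.Millisecond)
		}
	}
}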
	I1225 19:03:44.328234  301873 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1225 19:03:44.328476  301873 start.go:159] libmachine.API.Create for "auto-910464" (driver="docker")
	I1225 19:03:44.328509  301873 client.go:173] LocalClient.Create starting
	I1225 19:03:44.328585  301873 main.go:144] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22301-5579/.minikube/certs/ca.pem
	I1225 19:03:44.328626  301873 main.go:144] libmachine: Decoding PEM data...
	I1225 19:03:44.328649  301873 main.go:144] libmachine: Parsing certificate...
	I1225 19:03:44.328711  301873 main.go:144] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22301-5579/.minikube/certs/cert.pem
	I1225 19:03:44.328739  301873 main.go:144] libmachine: Decoding PEM data...
	I1225 19:03:44.328769  301873 main.go:144] libmachine: Parsing certificate...
	I1225 19:03:44.329254  301873 cli_runner.go:164] Run: docker network inspect auto-910464 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1225 19:03:44.352080  301873 cli_runner.go:211] docker network inspect auto-910464 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1225 19:03:44.352160  301873 network_create.go:284] running [docker network inspect auto-910464] to gather additional debugging logs...
	I1225 19:03:44.352182  301873 cli_runner.go:164] Run: docker network inspect auto-910464
	W1225 19:03:44.373100  301873 cli_runner.go:211] docker network inspect auto-910464 returned with exit code 1
	I1225 19:03:44.373185  301873 network_create.go:287] error running [docker network inspect auto-910464]: docker network inspect auto-910464: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network auto-910464 not found
	I1225 19:03:44.373232  301873 network_create.go:289] output of [docker network inspect auto-910464]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network auto-910464 not found
	
	** /stderr **
	I1225 19:03:44.373367  301873 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1225 19:03:44.396409  301873 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-ced36c84bfdd IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:4a:63:07:5b:3f:80} reservation:<nil>}
	I1225 19:03:44.397335  301873 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-4f7e79553acc IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:92:4f:4f:8b:03:9b} reservation:<nil>}
	I1225 19:03:44.398270  301873 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-f47bec209e15 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:b2:e9:83:11:22:b7} reservation:<nil>}
	I1225 19:03:44.399251  301873 network.go:206] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001eb3fc0}
	I1225 19:03:44.399287  301873 network_create.go:124] attempt to create docker network auto-910464 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I1225 19:03:44.399339  301873 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=auto-910464 auto-910464
	I1225 19:03:44.455995  301873 network_create.go:108] docker network auto-910464 192.168.76.0/24 created
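The network_create stanza above walks the private /24 ranges, skips the three already backing existing bridges (192.168.49/58/67), and creates auto-910464 on the first free one, 192.168.76.0/24, with the exact `docker network create` invocation logged. A minimal sketch that reproduces only the creation step by shelling out to docker with the subnet, gateway, and labels taken from the log; the free-subnet search itself is not reimplemented here.

// create_network.go - minimal sketch mirroring the `docker network create` call above.
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	cmd := exec.Command("docker", "network", "create",
		"--driver=bridge",
		"--subnet=192.168.76.0/24",
		"--gateway=192.168.76.1",
		"-o", "--ip-masq",
		"-o", "--icc",
		"-o", "com.docker.network.driver.mtu=1500",
		"--label=created_by.minikube.sigs.k8s.io=true",
		"--label=name.minikube.sigs.k8s.io=auto-910464",
		"auto-910464")
	out, err := cmd.CombinedOutput()
	if err != nil {
		fmt.Printf("network create failed: %v\n%s", err, out)
		return
	}
	fmt.Printf("created network auto-910464: %s", out)
}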
	I1225 19:03:44.456040  301873 kic.go:121] calculated static IP "192.168.76.2" for the "auto-910464" container
	I1225 19:03:44.456221  301873 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1225 19:03:44.478315  301873 cli_runner.go:164] Run: docker volume create auto-910464 --label name.minikube.sigs.k8s.io=auto-910464 --label created_by.minikube.sigs.k8s.io=true
	I1225 19:03:44.498717  301873 oci.go:103] Successfully created a docker volume auto-910464
	I1225 19:03:44.498793  301873 cli_runner.go:164] Run: docker run --rm --name auto-910464-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=auto-910464 --entrypoint /usr/bin/test -v auto-910464:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a -d /var/lib
	I1225 19:03:44.940812  301873 oci.go:107] Successfully prepared a docker volume auto-910464
	I1225 19:03:44.940933  301873 preload.go:188] Checking if preload exists for k8s version v1.34.3 and runtime crio
	I1225 19:03:44.940952  301873 kic.go:194] Starting extracting preloaded images to volume ...
	I1225 19:03:44.941030  301873 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22301-5579/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.3-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v auto-910464:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a -I lz4 -xf /preloaded.tar -C /extractDir
	I1225 19:03:48.960112  301873 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22301-5579/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.3-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v auto-910464:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a -I lz4 -xf /preloaded.tar -C /extractDir: (4.019018091s)
	I1225 19:03:48.960147  301873 kic.go:203] duration metric: took 4.019190862s to extract preloaded images to volume ...
	W1225 19:03:48.960256  301873 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1225 19:03:48.960306  301873 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1225 19:03:48.960355  301873 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1225 19:03:49.022647  301873 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname auto-910464 --name auto-910464 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=auto-910464 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=auto-910464 --network auto-910464 --ip 192.168.76.2 --volume auto-910464:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a
	I1225 19:03:49.470237  296906 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 5.50122865s
	I1225 19:03:49.487657  296906 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1225 19:03:49.498614  296906 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1225 19:03:49.507923  296906 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1225 19:03:49.508216  296906 kubeadm.go:319] [mark-control-plane] Marking the node newest-cni-731832 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1225 19:03:49.515694  296906 kubeadm.go:319] [bootstrap-token] Using token: udpeqs.veox05vjjorcq7oi
	I1225 19:03:49.524372  296906 out.go:252]   - Configuring RBAC rules ...
	I1225 19:03:49.524528  296906 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1225 19:03:49.527010  296906 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1225 19:03:49.534162  296906 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1225 19:03:49.537554  296906 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1225 19:03:49.540679  296906 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1225 19:03:49.543914  296906 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1225 19:03:49.877344  296906 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1225 19:03:50.293411  296906 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1225 19:03:50.877291  296906 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1225 19:03:50.878681  296906 kubeadm.go:319] 
	I1225 19:03:50.878790  296906 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1225 19:03:50.878805  296906 kubeadm.go:319] 
	I1225 19:03:50.878932  296906 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1225 19:03:50.878950  296906 kubeadm.go:319] 
	I1225 19:03:50.878995  296906 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1225 19:03:50.879073  296906 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1225 19:03:50.879145  296906 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1225 19:03:50.879164  296906 kubeadm.go:319] 
	I1225 19:03:50.879235  296906 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1225 19:03:50.879245  296906 kubeadm.go:319] 
	I1225 19:03:50.879307  296906 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1225 19:03:50.879318  296906 kubeadm.go:319] 
	I1225 19:03:50.879388  296906 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1225 19:03:50.879490  296906 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1225 19:03:50.879585  296906 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1225 19:03:50.879591  296906 kubeadm.go:319] 
	I1225 19:03:50.879711  296906 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1225 19:03:50.879814  296906 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1225 19:03:50.879820  296906 kubeadm.go:319] 
	I1225 19:03:50.879957  296906 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token udpeqs.veox05vjjorcq7oi \
	I1225 19:03:50.880096  296906 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:0fa81e5b6cf900085d4303938dc22eec97b7b2affd914cb977b5ad4f033ddf10 \
	I1225 19:03:50.880127  296906 kubeadm.go:319] 	--control-plane 
	I1225 19:03:50.880132  296906 kubeadm.go:319] 
	I1225 19:03:50.880248  296906 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1225 19:03:50.880255  296906 kubeadm.go:319] 
	I1225 19:03:50.880371  296906 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token udpeqs.veox05vjjorcq7oi \
	I1225 19:03:50.880523  296906 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:0fa81e5b6cf900085d4303938dc22eec97b7b2affd914cb977b5ad4f033ddf10 
	I1225 19:03:50.884244  296906 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1045-gcp\n", err: exit status 1
	I1225 19:03:50.884407  296906 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1225 19:03:50.884430  296906 cni.go:84] Creating CNI manager for ""
	I1225 19:03:50.884442  296906 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1225 19:03:50.886134  296906 out.go:179] * Configuring CNI (Container Networking Interface) ...
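Both clusters in this run hit the same branch of the CNI selection: the docker driver combined with the crio runtime leads to kindnet being recommended and NetworkPlugin=cni being set (see the cni.go lines at 19:03:44.287016 and 19:03:50.884442). A minimal sketch of that decision as it can be inferred from those log lines; this is an illustration, not a copy of minikube's implementation.

// pick_cni.go - minimal sketch of the selection the cni.go lines above log.
package main

import "fmt"

func recommendCNI(driver, runtime string) (cni, networkPlugin string) {
	if driver == "docker" && runtime == "crio" {
		// "docker" driver + "crio" runtime found, recommending kindnet
		return "kindnet", "cni"
	}
	// Other driver/runtime combinations make different choices; not covered here.
	return "", ""
}

func main() {
	cni, plugin := recommendCNI("docker", "crio")
	fmt.Printf("recommending %s, NetworkPlugin=%s\n", cni, plugin)
}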
	
	
	==> CRI-O <==
	Dec 25 19:03:38 default-k8s-diff-port-960022 crio[786]: time="2025-12-25T19:03:38.290058678Z" level=info msg="Starting container: 7554d714694d0de5158e7090b045bd77d512127844f593fefdf5d03aa753cf20" id=c454946a-eb74-473f-8166-4c4b0c39ec7c name=/runtime.v1.RuntimeService/StartContainer
	Dec 25 19:03:38 default-k8s-diff-port-960022 crio[786]: time="2025-12-25T19:03:38.292684215Z" level=info msg="Started container" PID=1917 containerID=7554d714694d0de5158e7090b045bd77d512127844f593fefdf5d03aa753cf20 description=kube-system/coredns-66bc5c9577-c9wmz/coredns id=c454946a-eb74-473f-8166-4c4b0c39ec7c name=/runtime.v1.RuntimeService/StartContainer sandboxID=61fff8f77febb0cbdf7d97ef1902094d856d2d0410538add2ca066cec089973f
	Dec 25 19:03:41 default-k8s-diff-port-960022 crio[786]: time="2025-12-25T19:03:41.826542665Z" level=info msg="Running pod sandbox: default/busybox/POD" id=5b7bda0d-c442-40c2-a299-138b52eb03c5 name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 25 19:03:41 default-k8s-diff-port-960022 crio[786]: time="2025-12-25T19:03:41.826615491Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 25 19:03:41 default-k8s-diff-port-960022 crio[786]: time="2025-12-25T19:03:41.830886938Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:dd5d3a9262ee60be0a46100fb3a004516ef80ddbca5bf42c1ec736af462ca32f UID:0defcc3b-45da-4e19-8614-16aacfe1ebfd NetNS:/var/run/netns/26d6f952-4170-4e0d-ab6e-b0bd32c52142 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc000f94270}] Aliases:map[]}"
	Dec 25 19:03:41 default-k8s-diff-port-960022 crio[786]: time="2025-12-25T19:03:41.830935343Z" level=info msg="Adding pod default_busybox to CNI network \"kindnet\" (type=ptp)"
	Dec 25 19:03:41 default-k8s-diff-port-960022 crio[786]: time="2025-12-25T19:03:41.840476475Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:dd5d3a9262ee60be0a46100fb3a004516ef80ddbca5bf42c1ec736af462ca32f UID:0defcc3b-45da-4e19-8614-16aacfe1ebfd NetNS:/var/run/netns/26d6f952-4170-4e0d-ab6e-b0bd32c52142 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc000f94270}] Aliases:map[]}"
	Dec 25 19:03:41 default-k8s-diff-port-960022 crio[786]: time="2025-12-25T19:03:41.840658549Z" level=info msg="Checking pod default_busybox for CNI network kindnet (type=ptp)"
	Dec 25 19:03:41 default-k8s-diff-port-960022 crio[786]: time="2025-12-25T19:03:41.8415738Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Dec 25 19:03:41 default-k8s-diff-port-960022 crio[786]: time="2025-12-25T19:03:41.842796674Z" level=info msg="Ran pod sandbox dd5d3a9262ee60be0a46100fb3a004516ef80ddbca5bf42c1ec736af462ca32f with infra container: default/busybox/POD" id=5b7bda0d-c442-40c2-a299-138b52eb03c5 name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 25 19:03:41 default-k8s-diff-port-960022 crio[786]: time="2025-12-25T19:03:41.844029466Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=e858e694-5372-40ac-bffc-4686c356c041 name=/runtime.v1.ImageService/ImageStatus
	Dec 25 19:03:41 default-k8s-diff-port-960022 crio[786]: time="2025-12-25T19:03:41.844142328Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=e858e694-5372-40ac-bffc-4686c356c041 name=/runtime.v1.ImageService/ImageStatus
	Dec 25 19:03:41 default-k8s-diff-port-960022 crio[786]: time="2025-12-25T19:03:41.844175322Z" level=info msg="Neither image nor artfiact gcr.io/k8s-minikube/busybox:1.28.4-glibc found" id=e858e694-5372-40ac-bffc-4686c356c041 name=/runtime.v1.ImageService/ImageStatus
	Dec 25 19:03:41 default-k8s-diff-port-960022 crio[786]: time="2025-12-25T19:03:41.844763545Z" level=info msg="Pulling image: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=2856fb43-9755-4346-92ea-14ade7bde863 name=/runtime.v1.ImageService/PullImage
	Dec 25 19:03:41 default-k8s-diff-port-960022 crio[786]: time="2025-12-25T19:03:41.846107194Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Dec 25 19:03:43 default-k8s-diff-port-960022 crio[786]: time="2025-12-25T19:03:43.19009952Z" level=info msg="Pulled image: gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998" id=2856fb43-9755-4346-92ea-14ade7bde863 name=/runtime.v1.ImageService/PullImage
	Dec 25 19:03:43 default-k8s-diff-port-960022 crio[786]: time="2025-12-25T19:03:43.190787485Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=a963ae28-4654-41f1-84cb-9d90a984460b name=/runtime.v1.ImageService/ImageStatus
	Dec 25 19:03:43 default-k8s-diff-port-960022 crio[786]: time="2025-12-25T19:03:43.192167176Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=4a703526-01e2-494b-9350-17eabf12f1b2 name=/runtime.v1.ImageService/ImageStatus
	Dec 25 19:03:43 default-k8s-diff-port-960022 crio[786]: time="2025-12-25T19:03:43.196807924Z" level=info msg="Creating container: default/busybox/busybox" id=6fc3debb-7024-4e09-8f5a-3145f0893c22 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 25 19:03:43 default-k8s-diff-port-960022 crio[786]: time="2025-12-25T19:03:43.196951635Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 25 19:03:43 default-k8s-diff-port-960022 crio[786]: time="2025-12-25T19:03:43.201532218Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 25 19:03:43 default-k8s-diff-port-960022 crio[786]: time="2025-12-25T19:03:43.202065235Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 25 19:03:43 default-k8s-diff-port-960022 crio[786]: time="2025-12-25T19:03:43.230420349Z" level=info msg="Created container 106e20f9d1ab4bdcc20148d7fe0cadb78eca0aac91241c45e02179c3d5a47b82: default/busybox/busybox" id=6fc3debb-7024-4e09-8f5a-3145f0893c22 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 25 19:03:43 default-k8s-diff-port-960022 crio[786]: time="2025-12-25T19:03:43.231131157Z" level=info msg="Starting container: 106e20f9d1ab4bdcc20148d7fe0cadb78eca0aac91241c45e02179c3d5a47b82" id=7b37443c-450c-4602-943c-18f63609e4ad name=/runtime.v1.RuntimeService/StartContainer
	Dec 25 19:03:43 default-k8s-diff-port-960022 crio[786]: time="2025-12-25T19:03:43.233329967Z" level=info msg="Started container" PID=1994 containerID=106e20f9d1ab4bdcc20148d7fe0cadb78eca0aac91241c45e02179c3d5a47b82 description=default/busybox/busybox id=7b37443c-450c-4602-943c-18f63609e4ad name=/runtime.v1.RuntimeService/StartContainer sandboxID=dd5d3a9262ee60be0a46100fb3a004516ef80ddbca5bf42c1ec736af462ca32f
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                                    NAMESPACE
	106e20f9d1ab4       gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998   8 seconds ago       Running             busybox                   0                   dd5d3a9262ee6       busybox                                                default
	7554d714694d0       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                      13 seconds ago      Running             coredns                   0                   61fff8f77febb       coredns-66bc5c9577-c9wmz                               kube-system
	537fb1330d8cc       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      13 seconds ago      Running             storage-provisioner       0                   d1348b73f1a3f       storage-provisioner                                    kube-system
	758faa0dc9cff       docker.io/kindest/kindnetd@sha256:7c22558dc06a570d46ea6e8a73b23cdc754eb81f7c08d3441a3171ad359ffc27    24 seconds ago      Running             kindnet-cni               0                   aa244b2724810       kindnet-hj6rr                                          kube-system
	e6118ad06193f       36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691                                      26 seconds ago      Running             kube-proxy                0                   e6958f8d0e43c       kube-proxy-wl784                                       kube-system
	dd24199a1a8bf       aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78                                      35 seconds ago      Running             kube-scheduler            0                   678fcf28eeeb4       kube-scheduler-default-k8s-diff-port-960022            kube-system
	4e34aae15b3f2       5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942                                      35 seconds ago      Running             kube-controller-manager   0                   b8c04659729cc       kube-controller-manager-default-k8s-diff-port-960022   kube-system
	6d85eca14599e       a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1                                      35 seconds ago      Running             etcd                      0                   bb54377c89b17       etcd-default-k8s-diff-port-960022                      kube-system
	8031a1829342c       aa27095f5619377172f3d59289ccb2ba567ebea93a736d1705be068b2c030b0c                                      35 seconds ago      Running             kube-apiserver            0                   5f1d61fda9d6d       kube-apiserver-default-k8s-diff-port-960022            kube-system
	
	
	==> coredns [7554d714694d0de5158e7090b045bd77d512127844f593fefdf5d03aa753cf20] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 66f0a748f44f6317a6b122af3f457c9dd0ecaed8718ffbf95a69434523efd9ec4992e71f54c7edd5753646fe9af89ac2138b9c3ce14d4a0ba9d2372a55f120bb
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:59317 - 23709 "HINFO IN 7775501201193556423.488172274351635214. udp 56 false 512" NXDOMAIN qr,rd,ra 131 0.036518135s
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-960022
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=default-k8s-diff-port-960022
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=65b0339f3ab6fa9cf527eb915d9288ef7a9c7fef
	                    minikube.k8s.io/name=default-k8s-diff-port-960022
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_25T19_03_21_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 25 Dec 2025 19:03:17 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-960022
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 25 Dec 2025 19:03:50 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 25 Dec 2025 19:03:50 +0000   Thu, 25 Dec 2025 19:03:16 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 25 Dec 2025 19:03:50 +0000   Thu, 25 Dec 2025 19:03:16 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 25 Dec 2025 19:03:50 +0000   Thu, 25 Dec 2025 19:03:16 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 25 Dec 2025 19:03:50 +0000   Thu, 25 Dec 2025 19:03:37 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.103.2
	  Hostname:    default-k8s-diff-port-960022
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863360Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863360Ki
	  pods:               110
	System Info:
	  Machine ID:                 6a05ebbd9dbde47388fc365d694bbbfd
	  System UUID:                66f57d40-b312-40d1-9a39-442700171c0b
	  Boot ID:                    665c5054-bd76-444c-ba4d-23c4edde1464
	  Kernel Version:             6.8.0-1045-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.3
	  Kubelet Version:            v1.34.3
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         11s
	  kube-system                 coredns-66bc5c9577-c9wmz                                100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     27s
	  kube-system                 etcd-default-k8s-diff-port-960022                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         32s
	  kube-system                 kindnet-hj6rr                                           100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      27s
	  kube-system                 kube-apiserver-default-k8s-diff-port-960022             250m (3%)     0 (0%)      0 (0%)           0 (0%)         32s
	  kube-system                 kube-controller-manager-default-k8s-diff-port-960022    200m (2%)     0 (0%)      0 (0%)           0 (0%)         32s
	  kube-system                 kube-proxy-wl784                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         27s
	  kube-system                 kube-scheduler-default-k8s-diff-port-960022             100m (1%)     0 (0%)      0 (0%)           0 (0%)         32s
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         26s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 25s   kube-proxy       
	  Normal  Starting                 33s   kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  32s   kubelet          Node default-k8s-diff-port-960022 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    32s   kubelet          Node default-k8s-diff-port-960022 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     32s   kubelet          Node default-k8s-diff-port-960022 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           28s   node-controller  Node default-k8s-diff-port-960022 event: Registered Node default-k8s-diff-port-960022 in Controller
	  Normal  NodeReady                15s   kubelet          Node default-k8s-diff-port-960022 status is now: NodeReady
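The "Allocated resources" figures in the node description above follow directly from the pod table: the listed CPU requests sum to 850m, and against the node's 8 allocatable CPUs (8000m) that is about 10.6%, shown rounded down as 10%. A small check of that arithmetic, using only the values printed above:

// request_sum.go - re-derives the 850m / 10% CPU figure from the pod table above.
package main

import "fmt"

func main() {
	requestsMilli := map[string]int{
		"coredns":                 100,
		"etcd":                    100,
		"kindnet":                 100,
		"kube-apiserver":          250,
		"kube-controller-manager": 200,
		"kube-scheduler":          100,
	}
	total := 0
	for _, m := range requestsMilli {
		total += m
	}
	allocatableMilli := 8 * 1000 // 8 CPUs reported as allocatable
	fmt.Printf("requests=%dm of %dm (%.1f%%)\n", total, allocatableMilli,
		100*float64(total)/float64(allocatableMilli))
}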
	
	
	==> dmesg <==
	[Dec25 18:17] MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
	[  +0.001703] TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details.
	[  +0.001000] MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
	[  +0.084012] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
	[  +0.391152] i8042: Warning: Keylock active
	[  +0.010665] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.485479] block sda: the capability attribute has been deprecated.
	[  +0.079658] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.024208] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +4.790329] kauditd_printk_skb: 47 callbacks suppressed
	
	
	==> etcd [6d85eca14599e28b9cf80a2291f60f35052f44da474a31cd44c47cc812966503] <==
	{"level":"warn","ts":"2025-12-25T19:03:17.022611Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58258","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-25T19:03:17.029969Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58280","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-25T19:03:17.036203Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58300","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-25T19:03:17.043303Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58332","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-25T19:03:17.050325Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58334","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-25T19:03:17.057463Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58346","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-25T19:03:17.064045Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58372","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-25T19:03:17.071082Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58396","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-25T19:03:17.089024Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58406","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-25T19:03:17.095732Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58428","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-25T19:03:17.101796Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58454","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-25T19:03:17.108155Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58480","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-25T19:03:17.115115Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58502","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-25T19:03:17.121575Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58534","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-25T19:03:17.128260Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58550","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-25T19:03:17.135445Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58552","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-25T19:03:17.142323Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58570","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-25T19:03:17.148507Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58592","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-25T19:03:17.155323Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58608","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-25T19:03:17.162905Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58626","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-25T19:03:17.176440Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58648","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-25T19:03:17.184647Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58674","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-25T19:03:17.191718Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58682","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-25T19:03:17.243643Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58714","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-12-25T19:03:48.529251Z","caller":"traceutil/trace.go:172","msg":"trace[1511085080] transaction","detail":"{read_only:false; response_revision:434; number_of_response:1; }","duration":"158.584759ms","start":"2025-12-25T19:03:48.370650Z","end":"2025-12-25T19:03:48.529235Z","steps":["trace[1511085080] 'process raft request'  (duration: 158.437989ms)"],"step_count":1}
	
	
	==> kernel <==
	 19:03:52 up 46 min,  0 user,  load average: 3.03, 2.56, 1.84
	Linux default-k8s-diff-port-960022 6.8.0-1045-gcp #48~22.04.1-Ubuntu SMP Tue Nov 25 13:07:56 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [758faa0dc9cffc5f5282d7d67a3bb58ef7dae72769080b33132b5e969a760f8a] <==
	I1225 19:03:27.476378       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1225 19:03:27.476700       1 main.go:139] hostIP = 192.168.103.2
	podIP = 192.168.103.2
	I1225 19:03:27.476873       1 main.go:148] setting mtu 1500 for CNI 
	I1225 19:03:27.476919       1 main.go:178] kindnetd IP family: "ipv4"
	I1225 19:03:27.476947       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-12-25T19:03:27Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1225 19:03:27.680107       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1225 19:03:27.680139       1 controller.go:381] "Waiting for informer caches to sync"
	I1225 19:03:27.680153       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1225 19:03:27.763885       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1225 19:03:28.163538       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1225 19:03:28.163610       1 metrics.go:72] Registering metrics
	I1225 19:03:28.163756       1 controller.go:711] "Syncing nftables rules"
	I1225 19:03:37.681098       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1225 19:03:37.681184       1 main.go:301] handling current node
	I1225 19:03:47.681161       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1225 19:03:47.681203       1 main.go:301] handling current node
	
	
	==> kube-apiserver [8031a1829342c3a33d0d75770510801ca024673bc8ad50b7ba91e3c0a4d4bec3] <==
	I1225 19:03:17.710233       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1225 19:03:17.710221       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1225 19:03:17.710404       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1225 19:03:17.711714       1 default_servicecidr_controller.go:228] Setting default ServiceCIDR condition Ready to True
	I1225 19:03:17.716994       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1225 19:03:17.729376       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1225 19:03:17.737502       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1225 19:03:18.610146       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1225 19:03:18.613811       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1225 19:03:18.613830       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1225 19:03:19.044443       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1225 19:03:19.079005       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1225 19:03:19.116403       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1225 19:03:19.121653       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.103.2]
	I1225 19:03:19.122516       1 controller.go:667] quota admission added evaluator for: endpoints
	I1225 19:03:19.126155       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1225 19:03:19.629606       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1225 19:03:20.200631       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1225 19:03:20.208736       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1225 19:03:20.216133       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1225 19:03:25.234689       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1225 19:03:25.238504       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1225 19:03:25.530999       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	I1225 19:03:25.735260       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	E1225 19:03:50.617600       1 conn.go:339] Error on socket receive: read tcp 192.168.103.2:8444->192.168.103.1:43376: use of closed network connection
	
	
	==> kube-controller-manager [4e34aae15b3f237a5dc6b6306defe9217dcd10a45010eddcad509351e30f2d68] <==
	I1225 19:03:24.627039       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="default-k8s-diff-port-960022"
	I1225 19:03:24.627109       1 node_lifecycle_controller.go:1025] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I1225 19:03:24.628015       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1225 19:03:24.628037       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1225 19:03:24.628236       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1225 19:03:24.628580       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1225 19:03:24.628656       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1225 19:03:24.628692       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1225 19:03:24.628699       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1225 19:03:24.628782       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1225 19:03:24.628991       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1225 19:03:24.629354       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1225 19:03:24.631138       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1225 19:03:24.631199       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1225 19:03:24.631271       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1225 19:03:24.631282       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1225 19:03:24.631288       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1225 19:03:24.633263       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1225 19:03:24.638344       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="default-k8s-diff-port-960022" podCIDRs=["10.244.0.0/24"]
	I1225 19:03:24.638558       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1225 19:03:24.644710       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1225 19:03:24.651033       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1225 19:03:24.655319       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1225 19:03:24.662665       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1225 19:03:39.629432       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [e6118ad06193f33c8a2b365852a64b7812b704258d9fd306f7206a9722284ec2] <==
	I1225 19:03:25.994511       1 server_linux.go:53] "Using iptables proxy"
	I1225 19:03:26.065395       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1225 19:03:26.165552       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1225 19:03:26.165591       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.103.2"]
	E1225 19:03:26.165706       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1225 19:03:26.190127       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1225 19:03:26.190216       1 server_linux.go:132] "Using iptables Proxier"
	I1225 19:03:26.197442       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1225 19:03:26.197974       1 server.go:527] "Version info" version="v1.34.3"
	I1225 19:03:26.197996       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1225 19:03:26.200582       1 config.go:200] "Starting service config controller"
	I1225 19:03:26.200607       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1225 19:03:26.200676       1 config.go:106] "Starting endpoint slice config controller"
	I1225 19:03:26.200829       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1225 19:03:26.201337       1 config.go:403] "Starting serviceCIDR config controller"
	I1225 19:03:26.202399       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1225 19:03:26.202271       1 config.go:309] "Starting node config controller"
	I1225 19:03:26.206096       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1225 19:03:26.206128       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1225 19:03:26.301927       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1225 19:03:26.302073       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1225 19:03:26.306378       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-scheduler [dd24199a1a8bfb16e03eb0b7f6a5c09ae2534b24595eee487e570fc3ca524728] <==
	E1225 19:03:17.659149       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1225 19:03:17.659178       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1225 19:03:17.659231       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1225 19:03:17.659233       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1225 19:03:17.659258       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1225 19:03:17.659302       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1225 19:03:17.659333       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1225 19:03:17.659376       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1225 19:03:17.659382       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1225 19:03:17.659454       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1225 19:03:17.659645       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1225 19:03:17.659800       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1225 19:03:17.659842       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1225 19:03:17.660308       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1225 19:03:18.506401       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1225 19:03:18.562086       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1225 19:03:18.654410       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1225 19:03:18.689830       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1225 19:03:18.691838       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1225 19:03:18.709253       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1225 19:03:18.721072       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1225 19:03:18.743158       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1225 19:03:18.831708       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1225 19:03:18.855017       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	I1225 19:03:19.255258       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Dec 25 19:03:21 default-k8s-diff-port-960022 kubelet[1326]: I1225 19:03:21.068852    1326 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-default-k8s-diff-port-960022" podStartSLOduration=1.068827738 podStartE2EDuration="1.068827738s" podCreationTimestamp="2025-12-25 19:03:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-25 19:03:21.057649343 +0000 UTC m=+1.124313625" watchObservedRunningTime="2025-12-25 19:03:21.068827738 +0000 UTC m=+1.135491939"
	Dec 25 19:03:21 default-k8s-diff-port-960022 kubelet[1326]: I1225 19:03:21.089682    1326 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/etcd-default-k8s-diff-port-960022" podStartSLOduration=1.089656538 podStartE2EDuration="1.089656538s" podCreationTimestamp="2025-12-25 19:03:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-25 19:03:21.069029698 +0000 UTC m=+1.135693895" watchObservedRunningTime="2025-12-25 19:03:21.089656538 +0000 UTC m=+1.156320734"
	Dec 25 19:03:21 default-k8s-diff-port-960022 kubelet[1326]: I1225 19:03:21.089881    1326 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-default-k8s-diff-port-960022" podStartSLOduration=1.089874381 podStartE2EDuration="1.089874381s" podCreationTimestamp="2025-12-25 19:03:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-25 19:03:21.089848926 +0000 UTC m=+1.156513127" watchObservedRunningTime="2025-12-25 19:03:21.089874381 +0000 UTC m=+1.156538584"
	Dec 25 19:03:21 default-k8s-diff-port-960022 kubelet[1326]: I1225 19:03:21.114150    1326 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-default-k8s-diff-port-960022" podStartSLOduration=1.114126082 podStartE2EDuration="1.114126082s" podCreationTimestamp="2025-12-25 19:03:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-25 19:03:21.103960274 +0000 UTC m=+1.170624475" watchObservedRunningTime="2025-12-25 19:03:21.114126082 +0000 UTC m=+1.180790281"
	Dec 25 19:03:24 default-k8s-diff-port-960022 kubelet[1326]: I1225 19:03:24.738927    1326 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Dec 25 19:03:24 default-k8s-diff-port-960022 kubelet[1326]: I1225 19:03:24.739638    1326 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Dec 25 19:03:25 default-k8s-diff-port-960022 kubelet[1326]: I1225 19:03:25.636789    1326 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xkcjs\" (UniqueName: \"kubernetes.io/projected/74edb28b-8829-4f8f-b2e9-caa22db0d2f6-kube-api-access-xkcjs\") pod \"kindnet-hj6rr\" (UID: \"74edb28b-8829-4f8f-b2e9-caa22db0d2f6\") " pod="kube-system/kindnet-hj6rr"
	Dec 25 19:03:25 default-k8s-diff-port-960022 kubelet[1326]: I1225 19:03:25.636834    1326 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/11627834-f71f-4055-a738-189d56587a73-xtables-lock\") pod \"kube-proxy-wl784\" (UID: \"11627834-f71f-4055-a738-189d56587a73\") " pod="kube-system/kube-proxy-wl784"
	Dec 25 19:03:25 default-k8s-diff-port-960022 kubelet[1326]: I1225 19:03:25.636856    1326 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/74edb28b-8829-4f8f-b2e9-caa22db0d2f6-xtables-lock\") pod \"kindnet-hj6rr\" (UID: \"74edb28b-8829-4f8f-b2e9-caa22db0d2f6\") " pod="kube-system/kindnet-hj6rr"
	Dec 25 19:03:25 default-k8s-diff-port-960022 kubelet[1326]: I1225 19:03:25.636870    1326 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/74edb28b-8829-4f8f-b2e9-caa22db0d2f6-lib-modules\") pod \"kindnet-hj6rr\" (UID: \"74edb28b-8829-4f8f-b2e9-caa22db0d2f6\") " pod="kube-system/kindnet-hj6rr"
	Dec 25 19:03:25 default-k8s-diff-port-960022 kubelet[1326]: I1225 19:03:25.636885    1326 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/11627834-f71f-4055-a738-189d56587a73-lib-modules\") pod \"kube-proxy-wl784\" (UID: \"11627834-f71f-4055-a738-189d56587a73\") " pod="kube-system/kube-proxy-wl784"
	Dec 25 19:03:25 default-k8s-diff-port-960022 kubelet[1326]: I1225 19:03:25.636934    1326 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jtkfg\" (UniqueName: \"kubernetes.io/projected/11627834-f71f-4055-a738-189d56587a73-kube-api-access-jtkfg\") pod \"kube-proxy-wl784\" (UID: \"11627834-f71f-4055-a738-189d56587a73\") " pod="kube-system/kube-proxy-wl784"
	Dec 25 19:03:25 default-k8s-diff-port-960022 kubelet[1326]: I1225 19:03:25.637014    1326 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/11627834-f71f-4055-a738-189d56587a73-kube-proxy\") pod \"kube-proxy-wl784\" (UID: \"11627834-f71f-4055-a738-189d56587a73\") " pod="kube-system/kube-proxy-wl784"
	Dec 25 19:03:25 default-k8s-diff-port-960022 kubelet[1326]: I1225 19:03:25.637046    1326 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/74edb28b-8829-4f8f-b2e9-caa22db0d2f6-cni-cfg\") pod \"kindnet-hj6rr\" (UID: \"74edb28b-8829-4f8f-b2e9-caa22db0d2f6\") " pod="kube-system/kindnet-hj6rr"
	Dec 25 19:03:26 default-k8s-diff-port-960022 kubelet[1326]: I1225 19:03:26.595246    1326 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-wl784" podStartSLOduration=1.595221937 podStartE2EDuration="1.595221937s" podCreationTimestamp="2025-12-25 19:03:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-25 19:03:26.060695224 +0000 UTC m=+6.127359427" watchObservedRunningTime="2025-12-25 19:03:26.595221937 +0000 UTC m=+6.661886139"
	Dec 25 19:03:30 default-k8s-diff-port-960022 kubelet[1326]: I1225 19:03:30.373747    1326 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-hj6rr" podStartSLOduration=4.066778549 podStartE2EDuration="5.373723596s" podCreationTimestamp="2025-12-25 19:03:25 +0000 UTC" firstStartedPulling="2025-12-25 19:03:25.872558887 +0000 UTC m=+5.939223079" lastFinishedPulling="2025-12-25 19:03:27.179503936 +0000 UTC m=+7.246168126" observedRunningTime="2025-12-25 19:03:28.062232186 +0000 UTC m=+8.128896387" watchObservedRunningTime="2025-12-25 19:03:30.373723596 +0000 UTC m=+10.440387791"
	Dec 25 19:03:37 default-k8s-diff-port-960022 kubelet[1326]: I1225 19:03:37.825010    1326 kubelet_node_status.go:439] "Fast updating node status as it just became ready"
	Dec 25 19:03:37 default-k8s-diff-port-960022 kubelet[1326]: I1225 19:03:37.932109    1326 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/9266e19b-60fd-4de1-bf8a-1998b627e8ed-tmp\") pod \"storage-provisioner\" (UID: \"9266e19b-60fd-4de1-bf8a-1998b627e8ed\") " pod="kube-system/storage-provisioner"
	Dec 25 19:03:37 default-k8s-diff-port-960022 kubelet[1326]: I1225 19:03:37.932373    1326 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/773864bb-884f-4d15-9364-d587199c3d06-config-volume\") pod \"coredns-66bc5c9577-c9wmz\" (UID: \"773864bb-884f-4d15-9364-d587199c3d06\") " pod="kube-system/coredns-66bc5c9577-c9wmz"
	Dec 25 19:03:37 default-k8s-diff-port-960022 kubelet[1326]: I1225 19:03:37.932413    1326 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-znjsq\" (UniqueName: \"kubernetes.io/projected/773864bb-884f-4d15-9364-d587199c3d06-kube-api-access-znjsq\") pod \"coredns-66bc5c9577-c9wmz\" (UID: \"773864bb-884f-4d15-9364-d587199c3d06\") " pod="kube-system/coredns-66bc5c9577-c9wmz"
	Dec 25 19:03:37 default-k8s-diff-port-960022 kubelet[1326]: I1225 19:03:37.932444    1326 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6dn5h\" (UniqueName: \"kubernetes.io/projected/9266e19b-60fd-4de1-bf8a-1998b627e8ed-kube-api-access-6dn5h\") pod \"storage-provisioner\" (UID: \"9266e19b-60fd-4de1-bf8a-1998b627e8ed\") " pod="kube-system/storage-provisioner"
	Dec 25 19:03:39 default-k8s-diff-port-960022 kubelet[1326]: I1225 19:03:39.103101    1326 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-c9wmz" podStartSLOduration=14.103076663 podStartE2EDuration="14.103076663s" podCreationTimestamp="2025-12-25 19:03:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-25 19:03:39.089743498 +0000 UTC m=+19.156407702" watchObservedRunningTime="2025-12-25 19:03:39.103076663 +0000 UTC m=+19.169740865"
	Dec 25 19:03:39 default-k8s-diff-port-960022 kubelet[1326]: I1225 19:03:39.113556    1326 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=13.113532613 podStartE2EDuration="13.113532613s" podCreationTimestamp="2025-12-25 19:03:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-25 19:03:39.113387851 +0000 UTC m=+19.180052064" watchObservedRunningTime="2025-12-25 19:03:39.113532613 +0000 UTC m=+19.180196815"
	Dec 25 19:03:41 default-k8s-diff-port-960022 kubelet[1326]: I1225 19:03:41.557029    1326 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z8vqg\" (UniqueName: \"kubernetes.io/projected/0defcc3b-45da-4e19-8614-16aacfe1ebfd-kube-api-access-z8vqg\") pod \"busybox\" (UID: \"0defcc3b-45da-4e19-8614-16aacfe1ebfd\") " pod="default/busybox"
	Dec 25 19:03:44 default-k8s-diff-port-960022 kubelet[1326]: I1225 19:03:44.101820    1326 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/busybox" podStartSLOduration=1.754596465 podStartE2EDuration="3.1018014s" podCreationTimestamp="2025-12-25 19:03:41 +0000 UTC" firstStartedPulling="2025-12-25 19:03:41.844412163 +0000 UTC m=+21.911076359" lastFinishedPulling="2025-12-25 19:03:43.19161711 +0000 UTC m=+23.258281294" observedRunningTime="2025-12-25 19:03:44.101565699 +0000 UTC m=+24.168229884" watchObservedRunningTime="2025-12-25 19:03:44.1018014 +0000 UTC m=+24.168465601"
	
	
	==> storage-provisioner [537fb1330d8cc2e7d66935254ea4a10d86bfb8fd9a68eb2ec38b9f4fa0be689a] <==
	I1225 19:03:38.283628       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1225 19:03:38.302208       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1225 19:03:38.302447       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1225 19:03:38.305758       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1225 19:03:38.315327       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1225 19:03:38.315548       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1225 19:03:38.316071       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"7550cadf-4431-4746-a11e-df2346058022", APIVersion:"v1", ResourceVersion:"407", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-diff-port-960022_2c9e6a99-d013-4859-8eb3-8142b48c73e2 became leader
	I1225 19:03:38.316152       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-960022_2c9e6a99-d013-4859-8eb3-8142b48c73e2!
	W1225 19:03:38.322770       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1225 19:03:38.331205       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1225 19:03:38.416996       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-960022_2c9e6a99-d013-4859-8eb3-8142b48c73e2!
	W1225 19:03:40.335085       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1225 19:03:40.340411       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1225 19:03:42.343622       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1225 19:03:42.347711       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1225 19:03:44.352143       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1225 19:03:44.357092       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1225 19:03:46.360823       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1225 19:03:46.365475       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1225 19:03:48.368690       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1225 19:03:48.530363       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1225 19:03:50.534616       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1225 19:03:50.540291       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-960022 -n default-k8s-diff-port-960022
helpers_test.go:270: (dbg) Run:  kubectl --context default-k8s-diff-port-960022 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:294: <<< TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive FAILED: end of post-mortem logs <<<
helpers_test.go:295: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (2.27s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (2.27s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-731832 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-731832 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 11 (287.332102ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-25T19:03:56Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-731832 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 11
start_stop_delete_test.go:209: WARNING: cni mode requires additional setup before pods can schedule :(
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/newest-cni/serial/EnableAddonWhileActive]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestStartStop/group/newest-cni/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect newest-cni-731832
helpers_test.go:244: (dbg) docker inspect newest-cni-731832:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "0d7dffda1d2c4721b68cb1c1ffbf33c95c8a8bd29b65c76f162d82b8c375ce81",
	        "Created": "2025-12-25T19:03:37.514242235Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 298313,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-25T19:03:37.558655871Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:c444b5e11df9d6bc496256b0e11f4e11deb33a885211ded2c3a4667df55bcb6b",
	        "ResolvConfPath": "/var/lib/docker/containers/0d7dffda1d2c4721b68cb1c1ffbf33c95c8a8bd29b65c76f162d82b8c375ce81/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/0d7dffda1d2c4721b68cb1c1ffbf33c95c8a8bd29b65c76f162d82b8c375ce81/hostname",
	        "HostsPath": "/var/lib/docker/containers/0d7dffda1d2c4721b68cb1c1ffbf33c95c8a8bd29b65c76f162d82b8c375ce81/hosts",
	        "LogPath": "/var/lib/docker/containers/0d7dffda1d2c4721b68cb1c1ffbf33c95c8a8bd29b65c76f162d82b8c375ce81/0d7dffda1d2c4721b68cb1c1ffbf33c95c8a8bd29b65c76f162d82b8c375ce81-json.log",
	        "Name": "/newest-cni-731832",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "newest-cni-731832:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "newest-cni-731832",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "0d7dffda1d2c4721b68cb1c1ffbf33c95c8a8bd29b65c76f162d82b8c375ce81",
	                "LowerDir": "/var/lib/docker/overlay2/d5cd8bb494ab04f4dcb5a30632bc8011864511df29c5ed2fb3f9b7b62d5e6d92-init/diff:/var/lib/docker/overlay2/8152586e7e91edad0090b5c322534edd1346ae6dc28cbca1827aa4c23f366758/diff",
	                "MergedDir": "/var/lib/docker/overlay2/d5cd8bb494ab04f4dcb5a30632bc8011864511df29c5ed2fb3f9b7b62d5e6d92/merged",
	                "UpperDir": "/var/lib/docker/overlay2/d5cd8bb494ab04f4dcb5a30632bc8011864511df29c5ed2fb3f9b7b62d5e6d92/diff",
	                "WorkDir": "/var/lib/docker/overlay2/d5cd8bb494ab04f4dcb5a30632bc8011864511df29c5ed2fb3f9b7b62d5e6d92/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "newest-cni-731832",
	                "Source": "/var/lib/docker/volumes/newest-cni-731832/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "newest-cni-731832",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "newest-cni-731832",
	                "name.minikube.sigs.k8s.io": "newest-cni-731832",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "16614da1f291f48e340667479e50823bd6b209c1b2d9473cf7a511df303df06b",
	            "SandboxKey": "/var/run/docker/netns/16614da1f291",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33093"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33094"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33097"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33095"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33096"
	                    }
	                ]
	            },
	            "Networks": {
	                "newest-cni-731832": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "360ef2d655feed4b5ef1f2b45737dda354b50d02cd936b222228be43a9a6ef2b",
	                    "EndpointID": "408fdb1264e1f555f7aa8683b0c5d072988a742974ad17ffe4b5c12ccbe3ce2c",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "MacAddress": "6a:65:40:e5:9e:3b",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "newest-cni-731832",
	                        "0d7dffda1d2c"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-731832 -n newest-cni-731832
helpers_test.go:253: <<< TestStartStop/group/newest-cni/serial/EnableAddonWhileActive FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestStartStop/group/newest-cni/serial/EnableAddonWhileActive]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-731832 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-amd64 -p newest-cni-731832 logs -n 25: (1.03724756s)
helpers_test.go:261: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                        ARGS                                                                                                                        │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ addons  │ enable metrics-server -p embed-certs-684693 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                           │ embed-certs-684693           │ jenkins │ v1.37.0 │ 25 Dec 25 19:02 UTC │                     │
	│ stop    │ -p embed-certs-684693 --alsologtostderr -v=3                                                                                                                                                                                                       │ embed-certs-684693           │ jenkins │ v1.37.0 │ 25 Dec 25 19:02 UTC │ 25 Dec 25 19:02 UTC │
	│ addons  │ enable dashboard -p no-preload-148352 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                       │ no-preload-148352            │ jenkins │ v1.37.0 │ 25 Dec 25 19:02 UTC │ 25 Dec 25 19:02 UTC │
	│ start   │ -p no-preload-148352 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-rc.1                                                                                       │ no-preload-148352            │ jenkins │ v1.37.0 │ 25 Dec 25 19:02 UTC │ 25 Dec 25 19:03 UTC │
	│ addons  │ enable dashboard -p embed-certs-684693 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                      │ embed-certs-684693           │ jenkins │ v1.37.0 │ 25 Dec 25 19:02 UTC │ 25 Dec 25 19:02 UTC │
	│ start   │ -p embed-certs-684693 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.3                                                                                             │ embed-certs-684693           │ jenkins │ v1.37.0 │ 25 Dec 25 19:02 UTC │ 25 Dec 25 19:03 UTC │
	│ image   │ old-k8s-version-163446 image list --format=json                                                                                                                                                                                                    │ old-k8s-version-163446       │ jenkins │ v1.37.0 │ 25 Dec 25 19:02 UTC │ 25 Dec 25 19:02 UTC │
	│ pause   │ -p old-k8s-version-163446 --alsologtostderr -v=1                                                                                                                                                                                                   │ old-k8s-version-163446       │ jenkins │ v1.37.0 │ 25 Dec 25 19:02 UTC │                     │
	│ delete  │ -p old-k8s-version-163446                                                                                                                                                                                                                          │ old-k8s-version-163446       │ jenkins │ v1.37.0 │ 25 Dec 25 19:02 UTC │ 25 Dec 25 19:03 UTC │
	│ delete  │ -p old-k8s-version-163446                                                                                                                                                                                                                          │ old-k8s-version-163446       │ jenkins │ v1.37.0 │ 25 Dec 25 19:03 UTC │ 25 Dec 25 19:03 UTC │
	│ delete  │ -p disable-driver-mounts-102827                                                                                                                                                                                                                    │ disable-driver-mounts-102827 │ jenkins │ v1.37.0 │ 25 Dec 25 19:03 UTC │ 25 Dec 25 19:03 UTC │
	│ start   │ -p default-k8s-diff-port-960022 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.3                                                                           │ default-k8s-diff-port-960022 │ jenkins │ v1.37.0 │ 25 Dec 25 19:03 UTC │ 25 Dec 25 19:03 UTC │
	│ image   │ no-preload-148352 image list --format=json                                                                                                                                                                                                         │ no-preload-148352            │ jenkins │ v1.37.0 │ 25 Dec 25 19:03 UTC │ 25 Dec 25 19:03 UTC │
	│ pause   │ -p no-preload-148352 --alsologtostderr -v=1                                                                                                                                                                                                        │ no-preload-148352            │ jenkins │ v1.37.0 │ 25 Dec 25 19:03 UTC │                     │
	│ delete  │ -p no-preload-148352                                                                                                                                                                                                                               │ no-preload-148352            │ jenkins │ v1.37.0 │ 25 Dec 25 19:03 UTC │ 25 Dec 25 19:03 UTC │
	│ delete  │ -p no-preload-148352                                                                                                                                                                                                                               │ no-preload-148352            │ jenkins │ v1.37.0 │ 25 Dec 25 19:03 UTC │ 25 Dec 25 19:03 UTC │
	│ start   │ -p newest-cni-731832 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-rc.1 │ newest-cni-731832            │ jenkins │ v1.37.0 │ 25 Dec 25 19:03 UTC │ 25 Dec 25 19:03 UTC │
	│ image   │ embed-certs-684693 image list --format=json                                                                                                                                                                                                        │ embed-certs-684693           │ jenkins │ v1.37.0 │ 25 Dec 25 19:03 UTC │ 25 Dec 25 19:03 UTC │
	│ pause   │ -p embed-certs-684693 --alsologtostderr -v=1                                                                                                                                                                                                       │ embed-certs-684693           │ jenkins │ v1.37.0 │ 25 Dec 25 19:03 UTC │                     │
	│ delete  │ -p embed-certs-684693                                                                                                                                                                                                                              │ embed-certs-684693           │ jenkins │ v1.37.0 │ 25 Dec 25 19:03 UTC │ 25 Dec 25 19:03 UTC │
	│ delete  │ -p embed-certs-684693                                                                                                                                                                                                                              │ embed-certs-684693           │ jenkins │ v1.37.0 │ 25 Dec 25 19:03 UTC │ 25 Dec 25 19:03 UTC │
	│ start   │ -p auto-910464 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio                                                                                                                            │ auto-910464                  │ jenkins │ v1.37.0 │ 25 Dec 25 19:03 UTC │                     │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-960022 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                 │ default-k8s-diff-port-960022 │ jenkins │ v1.37.0 │ 25 Dec 25 19:03 UTC │                     │
	│ stop    │ -p default-k8s-diff-port-960022 --alsologtostderr -v=3                                                                                                                                                                                             │ default-k8s-diff-port-960022 │ jenkins │ v1.37.0 │ 25 Dec 25 19:03 UTC │                     │
	│ addons  │ enable metrics-server -p newest-cni-731832 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                            │ newest-cni-731832            │ jenkins │ v1.37.0 │ 25 Dec 25 19:03 UTC │                     │
	└─────────┴────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────
────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/25 19:03:44
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1225 19:03:44.086595  301873 out.go:360] Setting OutFile to fd 1 ...
	I1225 19:03:44.086859  301873 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1225 19:03:44.086871  301873 out.go:374] Setting ErrFile to fd 2...
	I1225 19:03:44.086878  301873 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1225 19:03:44.087111  301873 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22301-5579/.minikube/bin
	I1225 19:03:44.087594  301873 out.go:368] Setting JSON to false
	I1225 19:03:44.088720  301873 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":2772,"bootTime":1766686652,"procs":300,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1225 19:03:44.088785  301873 start.go:143] virtualization: kvm guest
	I1225 19:03:44.090796  301873 out.go:179] * [auto-910464] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1225 19:03:44.092111  301873 out.go:179]   - MINIKUBE_LOCATION=22301
	I1225 19:03:44.092107  301873 notify.go:221] Checking for updates...
	I1225 19:03:44.094467  301873 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1225 19:03:44.095755  301873 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22301-5579/kubeconfig
	I1225 19:03:44.099210  301873 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22301-5579/.minikube
	I1225 19:03:44.100371  301873 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1225 19:03:44.101874  301873 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1225 19:03:44.103483  301873 config.go:182] Loaded profile config "default-k8s-diff-port-960022": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1225 19:03:44.103570  301873 config.go:182] Loaded profile config "kubernetes-upgrade-498224": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-rc.1
	I1225 19:03:44.103656  301873 config.go:182] Loaded profile config "newest-cni-731832": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-rc.1
	I1225 19:03:44.103738  301873 driver.go:422] Setting default libvirt URI to qemu:///system
	I1225 19:03:44.127458  301873 docker.go:124] docker version: linux-29.1.3:Docker Engine - Community
	I1225 19:03:44.127544  301873 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1225 19:03:44.198134  301873 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:65 OomKillDisable:false NGoroutines:75 SystemTime:2025-12-25 19:03:44.184550133 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:dea7da592f5d1d2b7755e3a161be07f43fad8f75 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1225 19:03:44.198244  301873 docker.go:319] overlay module found
	I1225 19:03:44.200674  301873 out.go:179] * Using the docker driver based on user configuration
	I1225 19:03:44.201852  301873 start.go:309] selected driver: docker
	I1225 19:03:44.201867  301873 start.go:928] validating driver "docker" against <nil>
	I1225 19:03:44.201878  301873 start.go:939] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1225 19:03:44.202524  301873 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1225 19:03:44.283363  301873 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:64 OomKillDisable:false NGoroutines:75 SystemTime:2025-12-25 19:03:44.269622488 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:dea7da592f5d1d2b7755e3a161be07f43fad8f75 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1225 19:03:44.283763  301873 start_flags.go:333] no existing cluster config was found, will generate one from the flags 
	I1225 19:03:44.284058  301873 start_flags.go:1019] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1225 19:03:44.285693  301873 out.go:179] * Using Docker driver with root privileges
	I1225 19:03:44.286931  301873 cni.go:84] Creating CNI manager for ""
	I1225 19:03:44.287016  301873 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1225 19:03:44.287032  301873 start_flags.go:342] Found "CNI" CNI - setting NetworkPlugin=cni
	I1225 19:03:44.287138  301873 start.go:353] cluster config:
	{Name:auto-910464 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:auto-910464 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1225 19:03:44.288352  301873 out.go:179] * Starting "auto-910464" primary control-plane node in "auto-910464" cluster
	I1225 19:03:44.290546  301873 cache.go:134] Beginning downloading kic base image for docker with crio
	I1225 19:03:44.291703  301873 out.go:179] * Pulling base image v0.0.48-1766570851-22316 ...
	I1225 19:03:44.293227  301873 preload.go:188] Checking if preload exists for k8s version v1.34.3 and runtime crio
	I1225 19:03:44.293261  301873 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22301-5579/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.3-cri-o-overlay-amd64.tar.lz4
	I1225 19:03:44.293269  301873 cache.go:65] Caching tarball of preloaded images
	I1225 19:03:44.293317  301873 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a in local docker daemon
	I1225 19:03:44.293345  301873 preload.go:251] Found /home/jenkins/minikube-integration/22301-5579/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1225 19:03:44.293352  301873 cache.go:68] Finished verifying existence of preloaded tar for v1.34.3 on crio
	I1225 19:03:44.293438  301873 profile.go:143] Saving config to /home/jenkins/minikube-integration/22301-5579/.minikube/profiles/auto-910464/config.json ...
	I1225 19:03:44.293452  301873 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22301-5579/.minikube/profiles/auto-910464/config.json: {Name:mka1a95cebb2cfb817db063373335d3e0e1b02cb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1225 19:03:44.325852  301873 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a in local docker daemon, skipping pull
	I1225 19:03:44.325877  301873 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a exists in daemon, skipping load
	I1225 19:03:44.325917  301873 cache.go:243] Successfully downloaded all kic artifacts
	I1225 19:03:44.325956  301873 start.go:360] acquireMachinesLock for auto-910464: {Name:mka875ee821acfdcf577182c3e0b1307ceee44bf Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1225 19:03:44.326055  301873 start.go:364] duration metric: took 81.481µs to acquireMachinesLock for "auto-910464"
	I1225 19:03:44.326083  301873 start.go:93] Provisioning new machine with config: &{Name:auto-910464 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:auto-910464 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false} &{Name: IP: Port:8443 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1225 19:03:44.326162  301873 start.go:125] createHost starting for "" (driver="docker")
	I1225 19:03:44.209984  260034 api_server.go:299] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1225 19:03:44.210413  260034 api_server.go:315] stopped: https://192.168.94.2:8443/healthz: Get "https://192.168.94.2:8443/healthz": dial tcp 192.168.94.2:8443: connect: connection refused
	I1225 19:03:44.210473  260034 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1225 19:03:44.210529  260034 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1225 19:03:44.253693  260034 cri.go:96] found id: "1d4a54214a1675f6c77eff77964da947e4f00cff45e007ad6adb0c898c3e76fa"
	I1225 19:03:44.253716  260034 cri.go:96] found id: "e3d5802d1e3d9285d4b0925e9f5391b90305352a65abbab3d6461e977e07719c"
	I1225 19:03:44.253722  260034 cri.go:96] found id: ""
	I1225 19:03:44.253731  260034 logs.go:282] 2 containers: [1d4a54214a1675f6c77eff77964da947e4f00cff45e007ad6adb0c898c3e76fa e3d5802d1e3d9285d4b0925e9f5391b90305352a65abbab3d6461e977e07719c]
	I1225 19:03:44.253793  260034 ssh_runner.go:195] Run: which crictl
	I1225 19:03:44.258755  260034 ssh_runner.go:195] Run: which crictl
	I1225 19:03:44.264497  260034 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1225 19:03:44.264578  260034 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1225 19:03:44.304426  260034 cri.go:96] found id: "b9e20d208a63ece66f4b7d503d5220ba92810f69d6963dee30a8cf70eeec5508"
	I1225 19:03:44.304454  260034 cri.go:96] found id: ""
	I1225 19:03:44.304464  260034 logs.go:282] 1 containers: [b9e20d208a63ece66f4b7d503d5220ba92810f69d6963dee30a8cf70eeec5508]
	I1225 19:03:44.304523  260034 ssh_runner.go:195] Run: which crictl
	I1225 19:03:44.310097  260034 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1225 19:03:44.310165  260034 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1225 19:03:44.343141  260034 cri.go:96] found id: ""
	I1225 19:03:44.343167  260034 logs.go:282] 0 containers: []
	W1225 19:03:44.343178  260034 logs.go:284] No container was found matching "coredns"
	I1225 19:03:44.343194  260034 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1225 19:03:44.343252  260034 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1225 19:03:44.381196  260034 cri.go:96] found id: "5908cdc560e3221d929c96c779b2e73ece21f96acb63b3adbf448cc7de791f2b"
	I1225 19:03:44.381266  260034 cri.go:96] found id: "47a1daa4bde269c4add70fd957319c2bd011e78566a1187d9a76b78fb4005782"
	I1225 19:03:44.381276  260034 cri.go:96] found id: ""
	I1225 19:03:44.381285  260034 logs.go:282] 2 containers: [5908cdc560e3221d929c96c779b2e73ece21f96acb63b3adbf448cc7de791f2b 47a1daa4bde269c4add70fd957319c2bd011e78566a1187d9a76b78fb4005782]
	I1225 19:03:44.381339  260034 ssh_runner.go:195] Run: which crictl
	I1225 19:03:44.386103  260034 ssh_runner.go:195] Run: which crictl
	I1225 19:03:44.390625  260034 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1225 19:03:44.390733  260034 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1225 19:03:44.426754  260034 cri.go:96] found id: ""
	I1225 19:03:44.426781  260034 logs.go:282] 0 containers: []
	W1225 19:03:44.426791  260034 logs.go:284] No container was found matching "kube-proxy"
	I1225 19:03:44.426801  260034 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1225 19:03:44.426853  260034 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1225 19:03:44.458800  260034 cri.go:96] found id: "0887dc59ddb6090de10772f8a1f50ab0d2afe1586d7c8d118ffeef1810deb88d"
	I1225 19:03:44.458821  260034 cri.go:96] found id: "33343dda71f9e4a85025812aa89a625b9ed4aacd8cf40e537a040b7a352c6338"
	I1225 19:03:44.458827  260034 cri.go:96] found id: ""
	I1225 19:03:44.458835  260034 logs.go:282] 2 containers: [0887dc59ddb6090de10772f8a1f50ab0d2afe1586d7c8d118ffeef1810deb88d 33343dda71f9e4a85025812aa89a625b9ed4aacd8cf40e537a040b7a352c6338]
	I1225 19:03:44.458902  260034 ssh_runner.go:195] Run: which crictl
	I1225 19:03:44.463190  260034 ssh_runner.go:195] Run: which crictl
	I1225 19:03:44.466836  260034 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1225 19:03:44.466917  260034 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1225 19:03:44.494212  260034 cri.go:96] found id: ""
	I1225 19:03:44.494238  260034 logs.go:282] 0 containers: []
	W1225 19:03:44.494249  260034 logs.go:284] No container was found matching "kindnet"
	I1225 19:03:44.494256  260034 cri.go:61] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1225 19:03:44.494319  260034 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=storage-provisioner
	I1225 19:03:44.529238  260034 cri.go:96] found id: ""
	I1225 19:03:44.529264  260034 logs.go:282] 0 containers: []
	W1225 19:03:44.529275  260034 logs.go:284] No container was found matching "storage-provisioner"
	I1225 19:03:44.529286  260034 logs.go:123] Gathering logs for kubelet ...
	I1225 19:03:44.529300  260034 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1225 19:03:44.635557  260034 logs.go:123] Gathering logs for kube-apiserver [e3d5802d1e3d9285d4b0925e9f5391b90305352a65abbab3d6461e977e07719c] ...
	I1225 19:03:44.635588  260034 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e3d5802d1e3d9285d4b0925e9f5391b90305352a65abbab3d6461e977e07719c"
	I1225 19:03:44.679662  260034 logs.go:123] Gathering logs for kube-scheduler [5908cdc560e3221d929c96c779b2e73ece21f96acb63b3adbf448cc7de791f2b] ...
	I1225 19:03:44.679701  260034 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5908cdc560e3221d929c96c779b2e73ece21f96acb63b3adbf448cc7de791f2b"
	I1225 19:03:44.715587  260034 logs.go:123] Gathering logs for kube-scheduler [47a1daa4bde269c4add70fd957319c2bd011e78566a1187d9a76b78fb4005782] ...
	I1225 19:03:44.715621  260034 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 47a1daa4bde269c4add70fd957319c2bd011e78566a1187d9a76b78fb4005782"
	I1225 19:03:44.747850  260034 logs.go:123] Gathering logs for kube-controller-manager [0887dc59ddb6090de10772f8a1f50ab0d2afe1586d7c8d118ffeef1810deb88d] ...
	I1225 19:03:44.747882  260034 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 0887dc59ddb6090de10772f8a1f50ab0d2afe1586d7c8d118ffeef1810deb88d"
	I1225 19:03:44.783735  260034 logs.go:123] Gathering logs for CRI-O ...
	I1225 19:03:44.783770  260034 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1225 19:03:44.860831  260034 logs.go:123] Gathering logs for dmesg ...
	I1225 19:03:44.860865  260034 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1225 19:03:44.877191  260034 logs.go:123] Gathering logs for describe nodes ...
	I1225 19:03:44.877219  260034 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1225 19:03:44.958743  260034 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1225 19:03:44.958765  260034 logs.go:123] Gathering logs for kube-apiserver [1d4a54214a1675f6c77eff77964da947e4f00cff45e007ad6adb0c898c3e76fa] ...
	I1225 19:03:44.958783  260034 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 1d4a54214a1675f6c77eff77964da947e4f00cff45e007ad6adb0c898c3e76fa"
	I1225 19:03:44.997866  260034 logs.go:123] Gathering logs for etcd [b9e20d208a63ece66f4b7d503d5220ba92810f69d6963dee30a8cf70eeec5508] ...
	I1225 19:03:44.997934  260034 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b9e20d208a63ece66f4b7d503d5220ba92810f69d6963dee30a8cf70eeec5508"
	I1225 19:03:45.039406  260034 logs.go:123] Gathering logs for kube-controller-manager [33343dda71f9e4a85025812aa89a625b9ed4aacd8cf40e537a040b7a352c6338] ...
	I1225 19:03:45.039446  260034 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 33343dda71f9e4a85025812aa89a625b9ed4aacd8cf40e537a040b7a352c6338"
	I1225 19:03:45.068748  260034 logs.go:123] Gathering logs for container status ...
	I1225 19:03:45.068788  260034 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1225 19:03:43.307780  296906 out.go:252]   - Booting up control plane ...
	I1225 19:03:43.307943  296906 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1225 19:03:43.308077  296906 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1225 19:03:43.308818  296906 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1225 19:03:43.325165  296906 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1225 19:03:43.325305  296906 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1225 19:03:43.332668  296906 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1225 19:03:43.333099  296906 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1225 19:03:43.333153  296906 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1225 19:03:43.462050  296906 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1225 19:03:43.462201  296906 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1225 19:03:43.963618  296906 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 501.758728ms
	I1225 19:03:43.969109  296906 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1225 19:03:43.969222  296906 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.85.2:8443/livez
	I1225 19:03:43.969336  296906 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1225 19:03:43.969433  296906 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1225 19:03:44.474017  296906 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 504.988722ms
	I1225 19:03:45.626482  296906 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 1.657634904s
	I1225 19:03:44.328234  301873 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1225 19:03:44.328476  301873 start.go:159] libmachine.API.Create for "auto-910464" (driver="docker")
	I1225 19:03:44.328509  301873 client.go:173] LocalClient.Create starting
	I1225 19:03:44.328585  301873 main.go:144] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22301-5579/.minikube/certs/ca.pem
	I1225 19:03:44.328626  301873 main.go:144] libmachine: Decoding PEM data...
	I1225 19:03:44.328649  301873 main.go:144] libmachine: Parsing certificate...
	I1225 19:03:44.328711  301873 main.go:144] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22301-5579/.minikube/certs/cert.pem
	I1225 19:03:44.328739  301873 main.go:144] libmachine: Decoding PEM data...
	I1225 19:03:44.328769  301873 main.go:144] libmachine: Parsing certificate...
	I1225 19:03:44.329254  301873 cli_runner.go:164] Run: docker network inspect auto-910464 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1225 19:03:44.352080  301873 cli_runner.go:211] docker network inspect auto-910464 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1225 19:03:44.352160  301873 network_create.go:284] running [docker network inspect auto-910464] to gather additional debugging logs...
	I1225 19:03:44.352182  301873 cli_runner.go:164] Run: docker network inspect auto-910464
	W1225 19:03:44.373100  301873 cli_runner.go:211] docker network inspect auto-910464 returned with exit code 1
	I1225 19:03:44.373185  301873 network_create.go:287] error running [docker network inspect auto-910464]: docker network inspect auto-910464: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network auto-910464 not found
	I1225 19:03:44.373232  301873 network_create.go:289] output of [docker network inspect auto-910464]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network auto-910464 not found
	
	** /stderr **
	I1225 19:03:44.373367  301873 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1225 19:03:44.396409  301873 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-ced36c84bfdd IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:4a:63:07:5b:3f:80} reservation:<nil>}
	I1225 19:03:44.397335  301873 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-4f7e79553acc IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:92:4f:4f:8b:03:9b} reservation:<nil>}
	I1225 19:03:44.398270  301873 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-f47bec209e15 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:b2:e9:83:11:22:b7} reservation:<nil>}
	I1225 19:03:44.399251  301873 network.go:206] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001eb3fc0}
	I1225 19:03:44.399287  301873 network_create.go:124] attempt to create docker network auto-910464 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I1225 19:03:44.399339  301873 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=auto-910464 auto-910464
	I1225 19:03:44.455995  301873 network_create.go:108] docker network auto-910464 192.168.76.0/24 created
	I1225 19:03:44.456040  301873 kic.go:121] calculated static IP "192.168.76.2" for the "auto-910464" container
	I1225 19:03:44.456221  301873 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1225 19:03:44.478315  301873 cli_runner.go:164] Run: docker volume create auto-910464 --label name.minikube.sigs.k8s.io=auto-910464 --label created_by.minikube.sigs.k8s.io=true
	I1225 19:03:44.498717  301873 oci.go:103] Successfully created a docker volume auto-910464
	I1225 19:03:44.498793  301873 cli_runner.go:164] Run: docker run --rm --name auto-910464-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=auto-910464 --entrypoint /usr/bin/test -v auto-910464:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a -d /var/lib
	I1225 19:03:44.940812  301873 oci.go:107] Successfully prepared a docker volume auto-910464
	I1225 19:03:44.940933  301873 preload.go:188] Checking if preload exists for k8s version v1.34.3 and runtime crio
	I1225 19:03:44.940952  301873 kic.go:194] Starting extracting preloaded images to volume ...
	I1225 19:03:44.941030  301873 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22301-5579/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.3-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v auto-910464:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a -I lz4 -xf /preloaded.tar -C /extractDir
	I1225 19:03:48.960112  301873 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22301-5579/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.3-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v auto-910464:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a -I lz4 -xf /preloaded.tar -C /extractDir: (4.019018091s)
	I1225 19:03:48.960147  301873 kic.go:203] duration metric: took 4.019190862s to extract preloaded images to volume ...
	W1225 19:03:48.960256  301873 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1225 19:03:48.960306  301873 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1225 19:03:48.960355  301873 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1225 19:03:49.022647  301873 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname auto-910464 --name auto-910464 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=auto-910464 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=auto-910464 --network auto-910464 --ip 192.168.76.2 --volume auto-910464:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a
	I1225 19:03:49.470237  296906 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 5.50122865s
	I1225 19:03:49.487657  296906 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1225 19:03:49.498614  296906 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1225 19:03:49.507923  296906 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1225 19:03:49.508216  296906 kubeadm.go:319] [mark-control-plane] Marking the node newest-cni-731832 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1225 19:03:49.515694  296906 kubeadm.go:319] [bootstrap-token] Using token: udpeqs.veox05vjjorcq7oi
	I1225 19:03:49.524372  296906 out.go:252]   - Configuring RBAC rules ...
	I1225 19:03:49.524528  296906 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1225 19:03:49.527010  296906 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1225 19:03:49.534162  296906 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1225 19:03:49.537554  296906 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1225 19:03:49.540679  296906 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1225 19:03:49.543914  296906 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1225 19:03:49.877344  296906 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1225 19:03:50.293411  296906 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1225 19:03:50.877291  296906 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1225 19:03:50.878681  296906 kubeadm.go:319] 
	I1225 19:03:50.878790  296906 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1225 19:03:50.878805  296906 kubeadm.go:319] 
	I1225 19:03:50.878932  296906 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1225 19:03:50.878950  296906 kubeadm.go:319] 
	I1225 19:03:50.878995  296906 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1225 19:03:50.879073  296906 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1225 19:03:50.879145  296906 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1225 19:03:50.879164  296906 kubeadm.go:319] 
	I1225 19:03:50.879235  296906 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1225 19:03:50.879245  296906 kubeadm.go:319] 
	I1225 19:03:50.879307  296906 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1225 19:03:50.879318  296906 kubeadm.go:319] 
	I1225 19:03:50.879388  296906 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1225 19:03:50.879490  296906 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1225 19:03:50.879585  296906 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1225 19:03:50.879591  296906 kubeadm.go:319] 
	I1225 19:03:50.879711  296906 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1225 19:03:50.879814  296906 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1225 19:03:50.879820  296906 kubeadm.go:319] 
	I1225 19:03:50.879957  296906 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token udpeqs.veox05vjjorcq7oi \
	I1225 19:03:50.880096  296906 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:0fa81e5b6cf900085d4303938dc22eec97b7b2affd914cb977b5ad4f033ddf10 \
	I1225 19:03:50.880127  296906 kubeadm.go:319] 	--control-plane 
	I1225 19:03:50.880132  296906 kubeadm.go:319] 
	I1225 19:03:50.880248  296906 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1225 19:03:50.880255  296906 kubeadm.go:319] 
	I1225 19:03:50.880371  296906 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token udpeqs.veox05vjjorcq7oi \
	I1225 19:03:50.880523  296906 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:0fa81e5b6cf900085d4303938dc22eec97b7b2affd914cb977b5ad4f033ddf10 
	I1225 19:03:50.884244  296906 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1045-gcp\n", err: exit status 1
	I1225 19:03:50.884407  296906 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1225 19:03:50.884430  296906 cni.go:84] Creating CNI manager for ""
	I1225 19:03:50.884442  296906 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1225 19:03:50.886134  296906 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1225 19:03:49.297213  301873 cli_runner.go:164] Run: docker container inspect auto-910464 --format={{.State.Running}}
	I1225 19:03:49.316675  301873 cli_runner.go:164] Run: docker container inspect auto-910464 --format={{.State.Status}}
	I1225 19:03:49.335977  301873 cli_runner.go:164] Run: docker exec auto-910464 stat /var/lib/dpkg/alternatives/iptables
	I1225 19:03:49.386511  301873 oci.go:144] the created container "auto-910464" has a running status.
	I1225 19:03:49.386554  301873 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/22301-5579/.minikube/machines/auto-910464/id_rsa...
	I1225 19:03:49.624489  301873 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/22301-5579/.minikube/machines/auto-910464/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1225 19:03:49.656415  301873 cli_runner.go:164] Run: docker container inspect auto-910464 --format={{.State.Status}}
	I1225 19:03:49.683357  301873 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1225 19:03:49.683386  301873 kic_runner.go:114] Args: [docker exec --privileged auto-910464 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1225 19:03:49.734948  301873 cli_runner.go:164] Run: docker container inspect auto-910464 --format={{.State.Status}}
	I1225 19:03:49.754635  301873 machine.go:94] provisionDockerMachine start ...
	I1225 19:03:49.754730  301873 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-910464
	I1225 19:03:49.772377  301873 main.go:144] libmachine: Using SSH client type: native
	I1225 19:03:49.772641  301873 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e300] 0x850fa0 <nil>  [] 0s} 127.0.0.1 33098 <nil> <nil>}
	I1225 19:03:49.772660  301873 main.go:144] libmachine: About to run SSH command:
	hostname
	I1225 19:03:49.898174  301873 main.go:144] libmachine: SSH cmd err, output: <nil>: auto-910464
	
	I1225 19:03:49.898205  301873 ubuntu.go:182] provisioning hostname "auto-910464"
	I1225 19:03:49.898265  301873 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-910464
	I1225 19:03:49.922618  301873 main.go:144] libmachine: Using SSH client type: native
	I1225 19:03:49.925788  301873 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e300] 0x850fa0 <nil>  [] 0s} 127.0.0.1 33098 <nil> <nil>}
	I1225 19:03:49.925839  301873 main.go:144] libmachine: About to run SSH command:
	sudo hostname auto-910464 && echo "auto-910464" | sudo tee /etc/hostname
	I1225 19:03:50.067635  301873 main.go:144] libmachine: SSH cmd err, output: <nil>: auto-910464
	
	I1225 19:03:50.067733  301873 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-910464
	I1225 19:03:50.088665  301873 main.go:144] libmachine: Using SSH client type: native
	I1225 19:03:50.088981  301873 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e300] 0x850fa0 <nil>  [] 0s} 127.0.0.1 33098 <nil> <nil>}
	I1225 19:03:50.089012  301873 main.go:144] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sauto-910464' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 auto-910464/g' /etc/hosts;
				else 
					echo '127.0.1.1 auto-910464' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1225 19:03:50.222286  301873 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	I1225 19:03:50.222309  301873 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22301-5579/.minikube CaCertPath:/home/jenkins/minikube-integration/22301-5579/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22301-5579/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22301-5579/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22301-5579/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22301-5579/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22301-5579/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22301-5579/.minikube}
	I1225 19:03:50.222340  301873 ubuntu.go:190] setting up certificates
	I1225 19:03:50.222350  301873 provision.go:84] configureAuth start
	I1225 19:03:50.222398  301873 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" auto-910464
	I1225 19:03:50.241764  301873 provision.go:143] copyHostCerts
	I1225 19:03:50.241823  301873 exec_runner.go:144] found /home/jenkins/minikube-integration/22301-5579/.minikube/key.pem, removing ...
	I1225 19:03:50.241836  301873 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22301-5579/.minikube/key.pem
	I1225 19:03:50.241961  301873 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22301-5579/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22301-5579/.minikube/key.pem (1679 bytes)
	I1225 19:03:50.242086  301873 exec_runner.go:144] found /home/jenkins/minikube-integration/22301-5579/.minikube/ca.pem, removing ...
	I1225 19:03:50.242101  301873 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22301-5579/.minikube/ca.pem
	I1225 19:03:50.242149  301873 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22301-5579/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22301-5579/.minikube/ca.pem (1078 bytes)
	I1225 19:03:50.242247  301873 exec_runner.go:144] found /home/jenkins/minikube-integration/22301-5579/.minikube/cert.pem, removing ...
	I1225 19:03:50.242257  301873 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22301-5579/.minikube/cert.pem
	I1225 19:03:50.242285  301873 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22301-5579/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22301-5579/.minikube/cert.pem (1123 bytes)
	I1225 19:03:50.242398  301873 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22301-5579/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22301-5579/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22301-5579/.minikube/certs/ca-key.pem org=jenkins.auto-910464 san=[127.0.0.1 192.168.76.2 auto-910464 localhost minikube]
	I1225 19:03:50.265006  301873 provision.go:177] copyRemoteCerts
	I1225 19:03:50.265070  301873 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1225 19:03:50.265116  301873 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-910464
	I1225 19:03:50.287188  301873 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33098 SSHKeyPath:/home/jenkins/minikube-integration/22301-5579/.minikube/machines/auto-910464/id_rsa Username:docker}
	I1225 19:03:50.381291  301873 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22301-5579/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1225 19:03:50.401308  301873 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22301-5579/.minikube/machines/server.pem --> /etc/docker/server.pem (1204 bytes)
	I1225 19:03:50.418712  301873 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22301-5579/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1225 19:03:50.436301  301873 provision.go:87] duration metric: took 213.926683ms to configureAuth
	I1225 19:03:50.436334  301873 ubuntu.go:206] setting minikube options for container-runtime
	I1225 19:03:50.436511  301873 config.go:182] Loaded profile config "auto-910464": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1225 19:03:50.436638  301873 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-910464
	I1225 19:03:50.456116  301873 main.go:144] libmachine: Using SSH client type: native
	I1225 19:03:50.456335  301873 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e300] 0x850fa0 <nil>  [] 0s} 127.0.0.1 33098 <nil> <nil>}
	I1225 19:03:50.456350  301873 main.go:144] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1225 19:03:50.749677  301873 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1225 19:03:50.749705  301873 machine.go:97] duration metric: took 995.048119ms to provisionDockerMachine
	I1225 19:03:50.749719  301873 client.go:176] duration metric: took 6.421202755s to LocalClient.Create
	I1225 19:03:50.749740  301873 start.go:167] duration metric: took 6.421261737s to libmachine.API.Create "auto-910464"
	I1225 19:03:50.749753  301873 start.go:293] postStartSetup for "auto-910464" (driver="docker")
	I1225 19:03:50.749766  301873 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1225 19:03:50.749833  301873 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1225 19:03:50.749869  301873 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-910464
	I1225 19:03:50.770235  301873 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33098 SSHKeyPath:/home/jenkins/minikube-integration/22301-5579/.minikube/machines/auto-910464/id_rsa Username:docker}
	I1225 19:03:50.867612  301873 ssh_runner.go:195] Run: cat /etc/os-release
	I1225 19:03:50.872264  301873 main.go:144] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1225 19:03:50.872293  301873 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1225 19:03:50.872306  301873 filesync.go:126] Scanning /home/jenkins/minikube-integration/22301-5579/.minikube/addons for local assets ...
	I1225 19:03:50.872364  301873 filesync.go:126] Scanning /home/jenkins/minikube-integration/22301-5579/.minikube/files for local assets ...
	I1225 19:03:50.872482  301873 filesync.go:149] local asset: /home/jenkins/minikube-integration/22301-5579/.minikube/files/etc/ssl/certs/91122.pem -> 91122.pem in /etc/ssl/certs
	I1225 19:03:50.872635  301873 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1225 19:03:50.882719  301873 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22301-5579/.minikube/files/etc/ssl/certs/91122.pem --> /etc/ssl/certs/91122.pem (1708 bytes)
	I1225 19:03:50.907693  301873 start.go:296] duration metric: took 157.922987ms for postStartSetup
	I1225 19:03:50.908113  301873 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" auto-910464
	I1225 19:03:50.933248  301873 profile.go:143] Saving config to /home/jenkins/minikube-integration/22301-5579/.minikube/profiles/auto-910464/config.json ...
	I1225 19:03:50.933588  301873 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1225 19:03:50.933647  301873 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-910464
	I1225 19:03:50.957260  301873 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33098 SSHKeyPath:/home/jenkins/minikube-integration/22301-5579/.minikube/machines/auto-910464/id_rsa Username:docker}
	I1225 19:03:51.056569  301873 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1225 19:03:51.061737  301873 start.go:128] duration metric: took 6.73556165s to createHost
	I1225 19:03:51.061762  301873 start.go:83] releasing machines lock for "auto-910464", held for 6.735694487s
	I1225 19:03:51.061826  301873 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" auto-910464
	I1225 19:03:51.086371  301873 ssh_runner.go:195] Run: cat /version.json
	I1225 19:03:51.086433  301873 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-910464
	I1225 19:03:51.086477  301873 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1225 19:03:51.086679  301873 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-910464
	I1225 19:03:51.111832  301873 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33098 SSHKeyPath:/home/jenkins/minikube-integration/22301-5579/.minikube/machines/auto-910464/id_rsa Username:docker}
	I1225 19:03:51.112155  301873 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33098 SSHKeyPath:/home/jenkins/minikube-integration/22301-5579/.minikube/machines/auto-910464/id_rsa Username:docker}
	I1225 19:03:51.290014  301873 ssh_runner.go:195] Run: systemctl --version
	I1225 19:03:51.300468  301873 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1225 19:03:51.349554  301873 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1225 19:03:51.355397  301873 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1225 19:03:51.355472  301873 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1225 19:03:51.386137  301873 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1225 19:03:51.386156  301873 start.go:496] detecting cgroup driver to use...
	I1225 19:03:51.386187  301873 detect.go:190] detected "systemd" cgroup driver on host os
	I1225 19:03:51.386229  301873 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1225 19:03:51.403859  301873 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1225 19:03:51.417548  301873 docker.go:218] disabling cri-docker service (if available) ...
	I1225 19:03:51.417600  301873 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1225 19:03:51.436516  301873 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1225 19:03:51.458674  301873 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1225 19:03:51.562094  301873 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1225 19:03:51.672291  301873 docker.go:234] disabling docker service ...
	I1225 19:03:51.672358  301873 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1225 19:03:51.693114  301873 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1225 19:03:51.707141  301873 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1225 19:03:51.802519  301873 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1225 19:03:51.911202  301873 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1225 19:03:51.924078  301873 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1225 19:03:51.939079  301873 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1225 19:03:51.939134  301873 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1225 19:03:51.950784  301873 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1225 19:03:51.950842  301873 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1225 19:03:51.959935  301873 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1225 19:03:51.968211  301873 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1225 19:03:51.977877  301873 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1225 19:03:51.986426  301873 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1225 19:03:51.995545  301873 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1225 19:03:52.009150  301873 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1225 19:03:52.017759  301873 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1225 19:03:52.025880  301873 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1225 19:03:52.034574  301873 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1225 19:03:52.126250  301873 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1225 19:03:52.270719  301873 start.go:553] Will wait 60s for socket path /var/run/crio/crio.sock
	I1225 19:03:52.270799  301873 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1225 19:03:52.274935  301873 start.go:574] Will wait 60s for crictl version
	I1225 19:03:52.274985  301873 ssh_runner.go:195] Run: which crictl
	I1225 19:03:52.278584  301873 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1225 19:03:52.305842  301873 start.go:590] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1225 19:03:52.305934  301873 ssh_runner.go:195] Run: crio --version
	I1225 19:03:52.337715  301873 ssh_runner.go:195] Run: crio --version
	I1225 19:03:52.372434  301873 out.go:179] * Preparing Kubernetes v1.34.3 on CRI-O 1.34.3 ...
	I1225 19:03:47.612121  260034 api_server.go:299] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1225 19:03:47.612609  260034 api_server.go:315] stopped: https://192.168.94.2:8443/healthz: Get "https://192.168.94.2:8443/healthz": dial tcp 192.168.94.2:8443: connect: connection refused
	I1225 19:03:47.612666  260034 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1225 19:03:47.612726  260034 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1225 19:03:47.641657  260034 cri.go:96] found id: "1d4a54214a1675f6c77eff77964da947e4f00cff45e007ad6adb0c898c3e76fa"
	I1225 19:03:47.641681  260034 cri.go:96] found id: "e3d5802d1e3d9285d4b0925e9f5391b90305352a65abbab3d6461e977e07719c"
	I1225 19:03:47.641687  260034 cri.go:96] found id: ""
	I1225 19:03:47.641696  260034 logs.go:282] 2 containers: [1d4a54214a1675f6c77eff77964da947e4f00cff45e007ad6adb0c898c3e76fa e3d5802d1e3d9285d4b0925e9f5391b90305352a65abbab3d6461e977e07719c]
	I1225 19:03:47.641755  260034 ssh_runner.go:195] Run: which crictl
	I1225 19:03:47.645853  260034 ssh_runner.go:195] Run: which crictl
	I1225 19:03:47.649686  260034 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1225 19:03:47.649756  260034 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1225 19:03:47.677150  260034 cri.go:96] found id: "b9e20d208a63ece66f4b7d503d5220ba92810f69d6963dee30a8cf70eeec5508"
	I1225 19:03:47.677177  260034 cri.go:96] found id: ""
	I1225 19:03:47.677187  260034 logs.go:282] 1 containers: [b9e20d208a63ece66f4b7d503d5220ba92810f69d6963dee30a8cf70eeec5508]
	I1225 19:03:47.677248  260034 ssh_runner.go:195] Run: which crictl
	I1225 19:03:47.681658  260034 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1225 19:03:47.681729  260034 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1225 19:03:47.711403  260034 cri.go:96] found id: ""
	I1225 19:03:47.711429  260034 logs.go:282] 0 containers: []
	W1225 19:03:47.711440  260034 logs.go:284] No container was found matching "coredns"
	I1225 19:03:47.711447  260034 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1225 19:03:47.711502  260034 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1225 19:03:47.737708  260034 cri.go:96] found id: "5908cdc560e3221d929c96c779b2e73ece21f96acb63b3adbf448cc7de791f2b"
	I1225 19:03:47.737735  260034 cri.go:96] found id: "47a1daa4bde269c4add70fd957319c2bd011e78566a1187d9a76b78fb4005782"
	I1225 19:03:47.737741  260034 cri.go:96] found id: ""
	I1225 19:03:47.737751  260034 logs.go:282] 2 containers: [5908cdc560e3221d929c96c779b2e73ece21f96acb63b3adbf448cc7de791f2b 47a1daa4bde269c4add70fd957319c2bd011e78566a1187d9a76b78fb4005782]
	I1225 19:03:47.737808  260034 ssh_runner.go:195] Run: which crictl
	I1225 19:03:47.741695  260034 ssh_runner.go:195] Run: which crictl
	I1225 19:03:47.745200  260034 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1225 19:03:47.745264  260034 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1225 19:03:47.771648  260034 cri.go:96] found id: ""
	I1225 19:03:47.771673  260034 logs.go:282] 0 containers: []
	W1225 19:03:47.771681  260034 logs.go:284] No container was found matching "kube-proxy"
	I1225 19:03:47.771687  260034 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1225 19:03:47.771740  260034 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1225 19:03:47.798527  260034 cri.go:96] found id: "0887dc59ddb6090de10772f8a1f50ab0d2afe1586d7c8d118ffeef1810deb88d"
	I1225 19:03:47.798552  260034 cri.go:96] found id: "33343dda71f9e4a85025812aa89a625b9ed4aacd8cf40e537a040b7a352c6338"
	I1225 19:03:47.798556  260034 cri.go:96] found id: ""
	I1225 19:03:47.798563  260034 logs.go:282] 2 containers: [0887dc59ddb6090de10772f8a1f50ab0d2afe1586d7c8d118ffeef1810deb88d 33343dda71f9e4a85025812aa89a625b9ed4aacd8cf40e537a040b7a352c6338]
	I1225 19:03:47.798613  260034 ssh_runner.go:195] Run: which crictl
	I1225 19:03:47.802558  260034 ssh_runner.go:195] Run: which crictl
	I1225 19:03:47.806349  260034 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1225 19:03:47.806411  260034 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1225 19:03:47.833490  260034 cri.go:96] found id: ""
	I1225 19:03:47.833519  260034 logs.go:282] 0 containers: []
	W1225 19:03:47.833531  260034 logs.go:284] No container was found matching "kindnet"
	I1225 19:03:47.833537  260034 cri.go:61] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1225 19:03:47.833592  260034 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=storage-provisioner
	I1225 19:03:47.860911  260034 cri.go:96] found id: ""
	I1225 19:03:47.860942  260034 logs.go:282] 0 containers: []
	W1225 19:03:47.860954  260034 logs.go:284] No container was found matching "storage-provisioner"
	I1225 19:03:47.860970  260034 logs.go:123] Gathering logs for container status ...
	I1225 19:03:47.860984  260034 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1225 19:03:47.892836  260034 logs.go:123] Gathering logs for kubelet ...
	I1225 19:03:47.892871  260034 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1225 19:03:47.977511  260034 logs.go:123] Gathering logs for dmesg ...
	I1225 19:03:47.977544  260034 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1225 19:03:47.990919  260034 logs.go:123] Gathering logs for describe nodes ...
	I1225 19:03:47.990942  260034 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1225 19:03:48.047586  260034 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1225 19:03:48.047612  260034 logs.go:123] Gathering logs for kube-apiserver [1d4a54214a1675f6c77eff77964da947e4f00cff45e007ad6adb0c898c3e76fa] ...
	I1225 19:03:48.047633  260034 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 1d4a54214a1675f6c77eff77964da947e4f00cff45e007ad6adb0c898c3e76fa"
	I1225 19:03:48.079019  260034 logs.go:123] Gathering logs for kube-scheduler [5908cdc560e3221d929c96c779b2e73ece21f96acb63b3adbf448cc7de791f2b] ...
	I1225 19:03:48.079047  260034 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5908cdc560e3221d929c96c779b2e73ece21f96acb63b3adbf448cc7de791f2b"
	I1225 19:03:48.107523  260034 logs.go:123] Gathering logs for kube-scheduler [47a1daa4bde269c4add70fd957319c2bd011e78566a1187d9a76b78fb4005782] ...
	I1225 19:03:48.107555  260034 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 47a1daa4bde269c4add70fd957319c2bd011e78566a1187d9a76b78fb4005782"
	I1225 19:03:48.134856  260034 logs.go:123] Gathering logs for kube-controller-manager [33343dda71f9e4a85025812aa89a625b9ed4aacd8cf40e537a040b7a352c6338] ...
	I1225 19:03:48.134881  260034 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 33343dda71f9e4a85025812aa89a625b9ed4aacd8cf40e537a040b7a352c6338"
	I1225 19:03:48.162731  260034 logs.go:123] Gathering logs for CRI-O ...
	I1225 19:03:48.162754  260034 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1225 19:03:48.216500  260034 logs.go:123] Gathering logs for kube-apiserver [e3d5802d1e3d9285d4b0925e9f5391b90305352a65abbab3d6461e977e07719c] ...
	I1225 19:03:48.216557  260034 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e3d5802d1e3d9285d4b0925e9f5391b90305352a65abbab3d6461e977e07719c"
	I1225 19:03:48.255055  260034 logs.go:123] Gathering logs for etcd [b9e20d208a63ece66f4b7d503d5220ba92810f69d6963dee30a8cf70eeec5508] ...
	I1225 19:03:48.255088  260034 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b9e20d208a63ece66f4b7d503d5220ba92810f69d6963dee30a8cf70eeec5508"
	I1225 19:03:48.289444  260034 logs.go:123] Gathering logs for kube-controller-manager [0887dc59ddb6090de10772f8a1f50ab0d2afe1586d7c8d118ffeef1810deb88d] ...
	I1225 19:03:48.289478  260034 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 0887dc59ddb6090de10772f8a1f50ab0d2afe1586d7c8d118ffeef1810deb88d"
	I1225 19:03:50.817960  260034 api_server.go:299] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1225 19:03:50.818359  260034 api_server.go:315] stopped: https://192.168.94.2:8443/healthz: Get "https://192.168.94.2:8443/healthz": dial tcp 192.168.94.2:8443: connect: connection refused
	I1225 19:03:50.818422  260034 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1225 19:03:50.818465  260034 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1225 19:03:50.846030  260034 cri.go:96] found id: "1d4a54214a1675f6c77eff77964da947e4f00cff45e007ad6adb0c898c3e76fa"
	I1225 19:03:50.846057  260034 cri.go:96] found id: "e3d5802d1e3d9285d4b0925e9f5391b90305352a65abbab3d6461e977e07719c"
	I1225 19:03:50.846063  260034 cri.go:96] found id: ""
	I1225 19:03:50.846075  260034 logs.go:282] 2 containers: [1d4a54214a1675f6c77eff77964da947e4f00cff45e007ad6adb0c898c3e76fa e3d5802d1e3d9285d4b0925e9f5391b90305352a65abbab3d6461e977e07719c]
	I1225 19:03:50.846142  260034 ssh_runner.go:195] Run: which crictl
	I1225 19:03:50.850398  260034 ssh_runner.go:195] Run: which crictl
	I1225 19:03:50.854108  260034 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1225 19:03:50.854171  260034 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1225 19:03:50.885696  260034 cri.go:96] found id: "b9e20d208a63ece66f4b7d503d5220ba92810f69d6963dee30a8cf70eeec5508"
	I1225 19:03:50.885717  260034 cri.go:96] found id: ""
	I1225 19:03:50.885727  260034 logs.go:282] 1 containers: [b9e20d208a63ece66f4b7d503d5220ba92810f69d6963dee30a8cf70eeec5508]
	I1225 19:03:50.885845  260034 ssh_runner.go:195] Run: which crictl
	I1225 19:03:50.890680  260034 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1225 19:03:50.890759  260034 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1225 19:03:50.926641  260034 cri.go:96] found id: ""
	I1225 19:03:50.926669  260034 logs.go:282] 0 containers: []
	W1225 19:03:50.926680  260034 logs.go:284] No container was found matching "coredns"
	I1225 19:03:50.926687  260034 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1225 19:03:50.926782  260034 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1225 19:03:50.964273  260034 cri.go:96] found id: "5908cdc560e3221d929c96c779b2e73ece21f96acb63b3adbf448cc7de791f2b"
	I1225 19:03:50.964301  260034 cri.go:96] found id: "47a1daa4bde269c4add70fd957319c2bd011e78566a1187d9a76b78fb4005782"
	I1225 19:03:50.964309  260034 cri.go:96] found id: ""
	I1225 19:03:50.964319  260034 logs.go:282] 2 containers: [5908cdc560e3221d929c96c779b2e73ece21f96acb63b3adbf448cc7de791f2b 47a1daa4bde269c4add70fd957319c2bd011e78566a1187d9a76b78fb4005782]
	I1225 19:03:50.964384  260034 ssh_runner.go:195] Run: which crictl
	I1225 19:03:50.970415  260034 ssh_runner.go:195] Run: which crictl
	I1225 19:03:50.975287  260034 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1225 19:03:50.975357  260034 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1225 19:03:51.008666  260034 cri.go:96] found id: ""
	I1225 19:03:51.008686  260034 logs.go:282] 0 containers: []
	W1225 19:03:51.008694  260034 logs.go:284] No container was found matching "kube-proxy"
	I1225 19:03:51.008699  260034 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1225 19:03:51.008751  260034 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1225 19:03:51.045563  260034 cri.go:96] found id: "0887dc59ddb6090de10772f8a1f50ab0d2afe1586d7c8d118ffeef1810deb88d"
	I1225 19:03:51.045589  260034 cri.go:96] found id: "33343dda71f9e4a85025812aa89a625b9ed4aacd8cf40e537a040b7a352c6338"
	I1225 19:03:51.045598  260034 cri.go:96] found id: ""
	I1225 19:03:51.045607  260034 logs.go:282] 2 containers: [0887dc59ddb6090de10772f8a1f50ab0d2afe1586d7c8d118ffeef1810deb88d 33343dda71f9e4a85025812aa89a625b9ed4aacd8cf40e537a040b7a352c6338]
	I1225 19:03:51.045663  260034 ssh_runner.go:195] Run: which crictl
	I1225 19:03:51.050303  260034 ssh_runner.go:195] Run: which crictl
	I1225 19:03:51.054650  260034 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1225 19:03:51.054716  260034 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1225 19:03:51.091290  260034 cri.go:96] found id: ""
	I1225 19:03:51.091315  260034 logs.go:282] 0 containers: []
	W1225 19:03:51.091326  260034 logs.go:284] No container was found matching "kindnet"
	I1225 19:03:51.091333  260034 cri.go:61] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1225 19:03:51.091392  260034 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=storage-provisioner
	I1225 19:03:51.132526  260034 cri.go:96] found id: ""
	I1225 19:03:51.132547  260034 logs.go:282] 0 containers: []
	W1225 19:03:51.132556  260034 logs.go:284] No container was found matching "storage-provisioner"
	I1225 19:03:51.132563  260034 logs.go:123] Gathering logs for dmesg ...
	I1225 19:03:51.132574  260034 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1225 19:03:51.149692  260034 logs.go:123] Gathering logs for describe nodes ...
	I1225 19:03:51.149717  260034 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1225 19:03:51.232059  260034 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1225 19:03:51.232085  260034 logs.go:123] Gathering logs for etcd [b9e20d208a63ece66f4b7d503d5220ba92810f69d6963dee30a8cf70eeec5508] ...
	I1225 19:03:51.232100  260034 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b9e20d208a63ece66f4b7d503d5220ba92810f69d6963dee30a8cf70eeec5508"
	I1225 19:03:51.281803  260034 logs.go:123] Gathering logs for kube-scheduler [47a1daa4bde269c4add70fd957319c2bd011e78566a1187d9a76b78fb4005782] ...
	I1225 19:03:51.281842  260034 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 47a1daa4bde269c4add70fd957319c2bd011e78566a1187d9a76b78fb4005782"
	I1225 19:03:51.319826  260034 logs.go:123] Gathering logs for kube-controller-manager [33343dda71f9e4a85025812aa89a625b9ed4aacd8cf40e537a040b7a352c6338] ...
	I1225 19:03:51.319982  260034 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 33343dda71f9e4a85025812aa89a625b9ed4aacd8cf40e537a040b7a352c6338"
	I1225 19:03:51.355434  260034 logs.go:123] Gathering logs for CRI-O ...
	I1225 19:03:51.355461  260034 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1225 19:03:51.416299  260034 logs.go:123] Gathering logs for container status ...
	I1225 19:03:51.416333  260034 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1225 19:03:51.453778  260034 logs.go:123] Gathering logs for kubelet ...
	I1225 19:03:51.453817  260034 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1225 19:03:51.568621  260034 logs.go:123] Gathering logs for kube-apiserver [1d4a54214a1675f6c77eff77964da947e4f00cff45e007ad6adb0c898c3e76fa] ...
	I1225 19:03:51.568658  260034 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 1d4a54214a1675f6c77eff77964da947e4f00cff45e007ad6adb0c898c3e76fa"
	I1225 19:03:51.610912  260034 logs.go:123] Gathering logs for kube-apiserver [e3d5802d1e3d9285d4b0925e9f5391b90305352a65abbab3d6461e977e07719c] ...
	I1225 19:03:51.611004  260034 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e3d5802d1e3d9285d4b0925e9f5391b90305352a65abbab3d6461e977e07719c"
	I1225 19:03:51.657530  260034 logs.go:123] Gathering logs for kube-scheduler [5908cdc560e3221d929c96c779b2e73ece21f96acb63b3adbf448cc7de791f2b] ...
	I1225 19:03:51.657567  260034 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5908cdc560e3221d929c96c779b2e73ece21f96acb63b3adbf448cc7de791f2b"
	I1225 19:03:51.685218  260034 logs.go:123] Gathering logs for kube-controller-manager [0887dc59ddb6090de10772f8a1f50ab0d2afe1586d7c8d118ffeef1810deb88d] ...
	I1225 19:03:51.685251  260034 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 0887dc59ddb6090de10772f8a1f50ab0d2afe1586d7c8d118ffeef1810deb88d"
	I1225 19:03:50.887506  296906 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1225 19:03:50.892664  296906 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl ...
	I1225 19:03:50.892683  296906 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2620 bytes)
	I1225 19:03:50.906699  296906 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1225 19:03:51.190998  296906 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1225 19:03:51.191082  296906 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1225 19:03:51.191114  296906 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes newest-cni-731832 minikube.k8s.io/updated_at=2025_12_25T19_03_51_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=65b0339f3ab6fa9cf527eb915d9288ef7a9c7fef minikube.k8s.io/name=newest-cni-731832 minikube.k8s.io/primary=true
	I1225 19:03:51.295107  296906 ops.go:34] apiserver oom_adj: -16
	I1225 19:03:51.295407  296906 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1225 19:03:51.795370  296906 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1225 19:03:52.295609  296906 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1225 19:03:52.373968  301873 cli_runner.go:164] Run: docker network inspect auto-910464 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1225 19:03:52.393042  301873 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1225 19:03:52.397461  301873 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1225 19:03:52.409586  301873 kubeadm.go:884] updating cluster {Name:auto-910464 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:auto-910464 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false} ...
	I1225 19:03:52.409683  301873 preload.go:188] Checking if preload exists for k8s version v1.34.3 and runtime crio
	I1225 19:03:52.409726  301873 ssh_runner.go:195] Run: sudo crictl images --output json
	I1225 19:03:52.444499  301873 crio.go:561] all images are preloaded for cri-o runtime.
	I1225 19:03:52.444524  301873 crio.go:433] Images already preloaded, skipping extraction
	I1225 19:03:52.444634  301873 ssh_runner.go:195] Run: sudo crictl images --output json
	I1225 19:03:52.473035  301873 crio.go:561] all images are preloaded for cri-o runtime.
	I1225 19:03:52.473056  301873 cache_images.go:86] Images are preloaded, skipping loading
	I1225 19:03:52.473065  301873 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.34.3 crio true true} ...
	I1225 19:03:52.473174  301873 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=auto-910464 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.3 ClusterName:auto-910464 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1225 19:03:52.473255  301873 ssh_runner.go:195] Run: crio config
	I1225 19:03:52.528865  301873 cni.go:84] Creating CNI manager for ""
	I1225 19:03:52.528885  301873 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1225 19:03:52.528916  301873 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1225 19:03:52.528945  301873 kubeadm.go:197] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.34.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:auto-910464 NodeName:auto-910464 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1225 19:03:52.529069  301873 kubeadm.go:203] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "auto-910464"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1225 19:03:52.529121  301873 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.3
	I1225 19:03:52.539544  301873 binaries.go:51] Found k8s binaries, skipping transfer
	I1225 19:03:52.539609  301873 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1225 19:03:52.549740  301873 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (361 bytes)
	I1225 19:03:52.562993  301873 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1225 19:03:52.580095  301873 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2207 bytes)
	I1225 19:03:52.594966  301873 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1225 19:03:52.599186  301873 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1225 19:03:52.611254  301873 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1225 19:03:52.693768  301873 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1225 19:03:52.723551  301873 certs.go:69] Setting up /home/jenkins/minikube-integration/22301-5579/.minikube/profiles/auto-910464 for IP: 192.168.76.2
	I1225 19:03:52.723573  301873 certs.go:195] generating shared ca certs ...
	I1225 19:03:52.723589  301873 certs.go:227] acquiring lock for ca certs: {Name:mkc96ab6366f062029d385d20297063671b19bca Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1225 19:03:52.723746  301873 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22301-5579/.minikube/ca.key
	I1225 19:03:52.723791  301873 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22301-5579/.minikube/proxy-client-ca.key
	I1225 19:03:52.723801  301873 certs.go:257] generating profile certs ...
	I1225 19:03:52.723857  301873 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22301-5579/.minikube/profiles/auto-910464/client.key
	I1225 19:03:52.723875  301873 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22301-5579/.minikube/profiles/auto-910464/client.crt with IP's: []
	I1225 19:03:52.979219  301873 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22301-5579/.minikube/profiles/auto-910464/client.crt ...
	I1225 19:03:52.979246  301873 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22301-5579/.minikube/profiles/auto-910464/client.crt: {Name:mkaa4b2e9621f9705f62b4966053590b2c8b947f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1225 19:03:52.979404  301873 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22301-5579/.minikube/profiles/auto-910464/client.key ...
	I1225 19:03:52.979419  301873 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22301-5579/.minikube/profiles/auto-910464/client.key: {Name:mkfd225c3db9c52cd26efda6b80f91818fd329a1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1225 19:03:52.979539  301873 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22301-5579/.minikube/profiles/auto-910464/apiserver.key.ea6b6d5e
	I1225 19:03:52.979558  301873 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22301-5579/.minikube/profiles/auto-910464/apiserver.crt.ea6b6d5e with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.76.2]
	I1225 19:03:53.097999  301873 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22301-5579/.minikube/profiles/auto-910464/apiserver.crt.ea6b6d5e ...
	I1225 19:03:53.098028  301873 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22301-5579/.minikube/profiles/auto-910464/apiserver.crt.ea6b6d5e: {Name:mk618a33bf56ad6838567b3e85a7dea95df845f0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1225 19:03:53.098191  301873 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22301-5579/.minikube/profiles/auto-910464/apiserver.key.ea6b6d5e ...
	I1225 19:03:53.098204  301873 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22301-5579/.minikube/profiles/auto-910464/apiserver.key.ea6b6d5e: {Name:mk09189770e218ac1d644ae832a49647ae49dae0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1225 19:03:53.098274  301873 certs.go:382] copying /home/jenkins/minikube-integration/22301-5579/.minikube/profiles/auto-910464/apiserver.crt.ea6b6d5e -> /home/jenkins/minikube-integration/22301-5579/.minikube/profiles/auto-910464/apiserver.crt
	I1225 19:03:53.098352  301873 certs.go:386] copying /home/jenkins/minikube-integration/22301-5579/.minikube/profiles/auto-910464/apiserver.key.ea6b6d5e -> /home/jenkins/minikube-integration/22301-5579/.minikube/profiles/auto-910464/apiserver.key
	I1225 19:03:53.098406  301873 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22301-5579/.minikube/profiles/auto-910464/proxy-client.key
	I1225 19:03:53.098419  301873 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22301-5579/.minikube/profiles/auto-910464/proxy-client.crt with IP's: []
	I1225 19:03:53.253819  301873 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22301-5579/.minikube/profiles/auto-910464/proxy-client.crt ...
	I1225 19:03:53.253847  301873 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22301-5579/.minikube/profiles/auto-910464/proxy-client.crt: {Name:mk5cfb72fc79bf7a2a218e765a7f33d697cbe5b2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1225 19:03:53.254050  301873 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22301-5579/.minikube/profiles/auto-910464/proxy-client.key ...
	I1225 19:03:53.254070  301873 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22301-5579/.minikube/profiles/auto-910464/proxy-client.key: {Name:mk05487f5e2791fc1e8af1317d83164a22bc202d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1225 19:03:53.254256  301873 certs.go:484] found cert: /home/jenkins/minikube-integration/22301-5579/.minikube/certs/9112.pem (1338 bytes)
	W1225 19:03:53.254293  301873 certs.go:480] ignoring /home/jenkins/minikube-integration/22301-5579/.minikube/certs/9112_empty.pem, impossibly tiny 0 bytes
	I1225 19:03:53.254304  301873 certs.go:484] found cert: /home/jenkins/minikube-integration/22301-5579/.minikube/certs/ca-key.pem (1679 bytes)
	I1225 19:03:53.254330  301873 certs.go:484] found cert: /home/jenkins/minikube-integration/22301-5579/.minikube/certs/ca.pem (1078 bytes)
	I1225 19:03:53.254354  301873 certs.go:484] found cert: /home/jenkins/minikube-integration/22301-5579/.minikube/certs/cert.pem (1123 bytes)
	I1225 19:03:53.254376  301873 certs.go:484] found cert: /home/jenkins/minikube-integration/22301-5579/.minikube/certs/key.pem (1679 bytes)
	I1225 19:03:53.254415  301873 certs.go:484] found cert: /home/jenkins/minikube-integration/22301-5579/.minikube/files/etc/ssl/certs/91122.pem (1708 bytes)
	I1225 19:03:53.254984  301873 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22301-5579/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1225 19:03:53.273452  301873 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22301-5579/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1225 19:03:53.290643  301873 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22301-5579/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1225 19:03:53.309001  301873 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22301-5579/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1225 19:03:53.328321  301873 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22301-5579/.minikube/profiles/auto-910464/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1415 bytes)
	I1225 19:03:53.346619  301873 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22301-5579/.minikube/profiles/auto-910464/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1225 19:03:53.366242  301873 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22301-5579/.minikube/profiles/auto-910464/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1225 19:03:53.383181  301873 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22301-5579/.minikube/profiles/auto-910464/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1671 bytes)
	I1225 19:03:53.399949  301873 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22301-5579/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1225 19:03:53.418522  301873 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22301-5579/.minikube/certs/9112.pem --> /usr/share/ca-certificates/9112.pem (1338 bytes)
	I1225 19:03:53.435184  301873 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22301-5579/.minikube/files/etc/ssl/certs/91122.pem --> /usr/share/ca-certificates/91122.pem (1708 bytes)
	I1225 19:03:53.452403  301873 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (722 bytes)
	I1225 19:03:53.464971  301873 ssh_runner.go:195] Run: openssl version
	I1225 19:03:53.470802  301873 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1225 19:03:53.477716  301873 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1225 19:03:53.484784  301873 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1225 19:03:53.488395  301873 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 25 18:28 /usr/share/ca-certificates/minikubeCA.pem
	I1225 19:03:53.488435  301873 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1225 19:03:53.522875  301873 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1225 19:03:53.531534  301873 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
	I1225 19:03:53.539077  301873 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/9112.pem
	I1225 19:03:53.547410  301873 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/9112.pem /etc/ssl/certs/9112.pem
	I1225 19:03:53.554739  301873 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/9112.pem
	I1225 19:03:53.558605  301873 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 25 18:34 /usr/share/ca-certificates/9112.pem
	I1225 19:03:53.558648  301873 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/9112.pem
	I1225 19:03:53.593097  301873 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1225 19:03:53.600878  301873 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/9112.pem /etc/ssl/certs/51391683.0
	I1225 19:03:53.608867  301873 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/91122.pem
	I1225 19:03:53.616615  301873 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/91122.pem /etc/ssl/certs/91122.pem
	I1225 19:03:53.624387  301873 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/91122.pem
	I1225 19:03:53.628773  301873 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 25 18:34 /usr/share/ca-certificates/91122.pem
	I1225 19:03:53.628829  301873 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/91122.pem
	I1225 19:03:53.664478  301873 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1225 19:03:53.672162  301873 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/91122.pem /etc/ssl/certs/3ec20f2e.0
	I1225 19:03:53.679618  301873 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1225 19:03:53.683087  301873 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1225 19:03:53.683141  301873 kubeadm.go:401] StartCluster: {Name:auto-910464 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:auto-910464 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1225 19:03:53.683200  301873 cri.go:61] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1225 19:03:53.683253  301873 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1225 19:03:53.709842  301873 cri.go:96] found id: ""
	I1225 19:03:53.709932  301873 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1225 19:03:53.718170  301873 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1225 19:03:53.725852  301873 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1225 19:03:53.725916  301873 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1225 19:03:53.733502  301873 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1225 19:03:53.733518  301873 kubeadm.go:158] found existing configuration files:
	
	I1225 19:03:53.733561  301873 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1225 19:03:53.741234  301873 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1225 19:03:53.741304  301873 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1225 19:03:53.748534  301873 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1225 19:03:53.755905  301873 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1225 19:03:53.755961  301873 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1225 19:03:53.763150  301873 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1225 19:03:53.770301  301873 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1225 19:03:53.770355  301873 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1225 19:03:53.777323  301873 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1225 19:03:53.784373  301873 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1225 19:03:53.784421  301873 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1225 19:03:53.791571  301873 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1225 19:03:53.833596  301873 kubeadm.go:319] [init] Using Kubernetes version: v1.34.3
	I1225 19:03:53.833673  301873 kubeadm.go:319] [preflight] Running pre-flight checks
	I1225 19:03:53.856349  301873 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1225 19:03:53.856442  301873 kubeadm.go:319] KERNEL_VERSION: 6.8.0-1045-gcp
	I1225 19:03:53.856490  301873 kubeadm.go:319] OS: Linux
	I1225 19:03:53.856574  301873 kubeadm.go:319] CGROUPS_CPU: enabled
	I1225 19:03:53.856665  301873 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1225 19:03:53.856748  301873 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1225 19:03:53.856812  301873 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1225 19:03:53.856873  301873 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1225 19:03:53.856965  301873 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1225 19:03:53.857048  301873 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1225 19:03:53.857104  301873 kubeadm.go:319] CGROUPS_IO: enabled
	I1225 19:03:53.919941  301873 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1225 19:03:53.920122  301873 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1225 19:03:53.920240  301873 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1225 19:03:53.928056  301873 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1225 19:03:53.930328  301873 out.go:252]   - Generating certificates and keys ...
	I1225 19:03:53.930430  301873 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1225 19:03:53.930520  301873 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1225 19:03:52.796257  296906 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1225 19:03:53.296096  296906 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1225 19:03:53.796250  296906 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1225 19:03:54.296117  296906 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1225 19:03:54.795882  296906 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1225 19:03:55.295554  296906 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1225 19:03:55.364715  296906 kubeadm.go:1114] duration metric: took 4.173704139s to wait for elevateKubeSystemPrivileges
	I1225 19:03:55.364750  296906 kubeadm.go:403] duration metric: took 13.422929949s to StartCluster
	I1225 19:03:55.364768  296906 settings.go:142] acquiring lock: {Name:mk8db67a95daebdad9164c803819dcb179c3006a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1225 19:03:55.364840  296906 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22301-5579/kubeconfig
	I1225 19:03:55.366338  296906 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22301-5579/kubeconfig: {Name:mk959de02482281f87c2171d9b2421941fad1e06 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1225 19:03:55.366592  296906 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1225 19:03:55.366618  296906 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1225 19:03:55.366681  296906 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1225 19:03:55.366784  296906 addons.go:70] Setting storage-provisioner=true in profile "newest-cni-731832"
	I1225 19:03:55.366806  296906 addons.go:239] Setting addon storage-provisioner=true in "newest-cni-731832"
	I1225 19:03:55.366841  296906 host.go:66] Checking if "newest-cni-731832" exists ...
	I1225 19:03:55.366880  296906 config.go:182] Loaded profile config "newest-cni-731832": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-rc.1
	I1225 19:03:55.366942  296906 addons.go:70] Setting default-storageclass=true in profile "newest-cni-731832"
	I1225 19:03:55.366961  296906 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-731832"
	I1225 19:03:55.367289  296906 cli_runner.go:164] Run: docker container inspect newest-cni-731832 --format={{.State.Status}}
	I1225 19:03:55.367425  296906 cli_runner.go:164] Run: docker container inspect newest-cni-731832 --format={{.State.Status}}
	I1225 19:03:55.369083  296906 out.go:179] * Verifying Kubernetes components...
	I1225 19:03:55.370414  296906 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1225 19:03:55.390615  296906 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1225 19:03:55.391828  296906 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1225 19:03:55.391853  296906 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1225 19:03:55.391831  296906 addons.go:239] Setting addon default-storageclass=true in "newest-cni-731832"
	I1225 19:03:55.391935  296906 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-731832
	I1225 19:03:55.391969  296906 host.go:66] Checking if "newest-cni-731832" exists ...
	I1225 19:03:55.392422  296906 cli_runner.go:164] Run: docker container inspect newest-cni-731832 --format={{.State.Status}}
	I1225 19:03:55.422506  296906 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1225 19:03:55.422585  296906 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1225 19:03:55.422659  296906 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-731832
	I1225 19:03:55.423695  296906 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33093 SSHKeyPath:/home/jenkins/minikube-integration/22301-5579/.minikube/machines/newest-cni-731832/id_rsa Username:docker}
	I1225 19:03:55.444143  296906 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33093 SSHKeyPath:/home/jenkins/minikube-integration/22301-5579/.minikube/machines/newest-cni-731832/id_rsa Username:docker}
	I1225 19:03:55.467046  296906 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.85.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1225 19:03:55.509885  296906 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1225 19:03:55.530625  296906 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1225 19:03:55.555097  296906 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1225 19:03:55.645955  296906 start.go:987] {"host.minikube.internal": 192.168.85.1} host record injected into CoreDNS's ConfigMap
	I1225 19:03:55.647111  296906 api_server.go:52] waiting for apiserver process to appear ...
	I1225 19:03:55.647183  296906 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1225 19:03:55.850704  296906 api_server.go:72] duration metric: took 484.045771ms to wait for apiserver process to appear ...
	I1225 19:03:55.850742  296906 api_server.go:88] waiting for apiserver healthz status ...
	I1225 19:03:55.850763  296906 api_server.go:299] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1225 19:03:55.856327  296906 api_server.go:325] https://192.168.85.2:8443/healthz returned 200:
	ok
	I1225 19:03:55.857259  296906 api_server.go:141] control plane version: v1.35.0-rc.1
	I1225 19:03:55.857284  296906 api_server.go:131] duration metric: took 6.534445ms to wait for apiserver health ...
	I1225 19:03:55.857294  296906 system_pods.go:43] waiting for kube-system pods to appear ...
	I1225 19:03:55.857346  296906 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1225 19:03:55.858753  296906 addons.go:530] duration metric: took 492.073035ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I1225 19:03:55.860069  296906 system_pods.go:59] 8 kube-system pods found
	I1225 19:03:55.860110  296906 system_pods.go:61] "coredns-7d764666f9-hsm6h" [650e5fe1-fc5a-4f59-86ae-9bee4f454a6c] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint(s). no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1225 19:03:55.860122  296906 system_pods.go:61] "etcd-newest-cni-731832" [5dd7d1d7-ba36-4070-b68a-e45da3f0a4e4] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1225 19:03:55.860140  296906 system_pods.go:61] "kindnet-l587m" [6a88d1e0-b81d-4b51-a2dd-283548deb416] Pending / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1225 19:03:55.860150  296906 system_pods.go:61] "kube-apiserver-newest-cni-731832" [ec1a8903-a48a-4dd4-a9c9-2b44931f0f54] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1225 19:03:55.860155  296906 system_pods.go:61] "kube-controller-manager-newest-cni-731832" [0f388c1f-3938-4912-8aa7-4cd5c107b62a] Running
	I1225 19:03:55.860170  296906 system_pods.go:61] "kube-proxy-gnqfh" [7a8b403f-215a-402e-80a0-8c070cdc4875] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1225 19:03:55.860175  296906 system_pods.go:61] "kube-scheduler-newest-cni-731832" [7fa22a28-98a7-4b81-8660-fa3e637a8d0a] Running
	I1225 19:03:55.860181  296906 system_pods.go:61] "storage-provisioner" [c0825e53-f743-4887-ab64-13e5553dca5f] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint(s). no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1225 19:03:55.860189  296906 system_pods.go:74] duration metric: took 2.887684ms to wait for pod list to return data ...
	I1225 19:03:55.860197  296906 default_sa.go:34] waiting for default service account to be created ...
	I1225 19:03:55.862444  296906 default_sa.go:45] found service account: "default"
	I1225 19:03:55.862465  296906 default_sa.go:55] duration metric: took 2.26105ms for default service account to be created ...
	I1225 19:03:55.862476  296906 kubeadm.go:587] duration metric: took 495.823151ms to wait for: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1225 19:03:55.862499  296906 node_conditions.go:102] verifying NodePressure condition ...
	I1225 19:03:55.864829  296906 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1225 19:03:55.864862  296906 node_conditions.go:123] node cpu capacity is 8
	I1225 19:03:55.864878  296906 node_conditions.go:105] duration metric: took 2.372926ms to run NodePressure ...
	I1225 19:03:55.864905  296906 start.go:242] waiting for startup goroutines ...
	I1225 19:03:56.151096  296906 kapi.go:214] "coredns" deployment in "kube-system" namespace and "newest-cni-731832" context rescaled to 1 replicas
	I1225 19:03:56.151134  296906 start.go:247] waiting for cluster config update ...
	I1225 19:03:56.151154  296906 start.go:256] writing updated cluster config ...
	I1225 19:03:56.151400  296906 ssh_runner.go:195] Run: rm -f paused
	I1225 19:03:56.209872  296906 start.go:625] kubectl: 1.35.0, cluster: 1.35.0-rc.1 (minor skew: 0)
	I1225 19:03:56.211964  296906 out.go:179] * Done! kubectl is now configured to use "newest-cni-731832" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Dec 25 19:03:55 newest-cni-731832 crio[772]: time="2025-12-25T19:03:55.935805367Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Dec 25 19:03:55 newest-cni-731832 crio[772]: time="2025-12-25T19:03:55.936660104Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.35.0-rc.1" id=28943804-3f44-4378-b3a5-cc4cad3bd33a name=/runtime.v1.ImageService/ImageStatus
	Dec 25 19:03:55 newest-cni-731832 crio[772]: time="2025-12-25T19:03:55.936684158Z" level=info msg="Ran pod sandbox df78eb75a5e149e6d3c608019fb5de066032cf4313c3a61c0cace638eef0c44e with infra container: kube-system/kindnet-l587m/POD" id=4c06c942-0b61-4337-98d4-e7bcffaf5ee2 name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 25 19:03:55 newest-cni-731832 crio[772]: time="2025-12-25T19:03:55.937649321Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20251212-v0.29.0-alpha-105-g20ccfc88" id=2238cd7c-c2a4-477c-abff-0a5132180762 name=/runtime.v1.ImageService/ImageStatus
	Dec 25 19:03:55 newest-cni-731832 crio[772]: time="2025-12-25T19:03:55.937792296Z" level=info msg="Image docker.io/kindest/kindnetd:v20251212-v0.29.0-alpha-105-g20ccfc88 not found" id=2238cd7c-c2a4-477c-abff-0a5132180762 name=/runtime.v1.ImageService/ImageStatus
	Dec 25 19:03:55 newest-cni-731832 crio[772]: time="2025-12-25T19:03:55.937850573Z" level=info msg="Neither image nor artfiact docker.io/kindest/kindnetd:v20251212-v0.29.0-alpha-105-g20ccfc88 found" id=2238cd7c-c2a4-477c-abff-0a5132180762 name=/runtime.v1.ImageService/ImageStatus
	Dec 25 19:03:55 newest-cni-731832 crio[772]: time="2025-12-25T19:03:55.939438321Z" level=info msg="Pulling image: docker.io/kindest/kindnetd:v20251212-v0.29.0-alpha-105-g20ccfc88" id=17349dd1-430c-4dab-b410-8335d4a34199 name=/runtime.v1.ImageService/PullImage
	Dec 25 19:03:55 newest-cni-731832 crio[772]: time="2025-12-25T19:03:55.940124674Z" level=info msg="Creating container: kube-system/kube-proxy-gnqfh/kube-proxy" id=7b1f8a42-f770-4989-ad06-e892954307a4 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 25 19:03:55 newest-cni-731832 crio[772]: time="2025-12-25T19:03:55.940236554Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 25 19:03:55 newest-cni-731832 crio[772]: time="2025-12-25T19:03:55.94250023Z" level=info msg="Trying to access \"docker.io/kindest/kindnetd:v20251212-v0.29.0-alpha-105-g20ccfc88\""
	Dec 25 19:03:55 newest-cni-731832 crio[772]: time="2025-12-25T19:03:55.948547325Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 25 19:03:55 newest-cni-731832 crio[772]: time="2025-12-25T19:03:55.949116179Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 25 19:03:55 newest-cni-731832 crio[772]: time="2025-12-25T19:03:55.993128726Z" level=info msg="Created container 97cfdce66783da4264fba915969a0bfff73228484c848612f1c2fd0b63697724: kube-system/kube-proxy-gnqfh/kube-proxy" id=7b1f8a42-f770-4989-ad06-e892954307a4 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 25 19:03:55 newest-cni-731832 crio[772]: time="2025-12-25T19:03:55.994019059Z" level=info msg="Starting container: 97cfdce66783da4264fba915969a0bfff73228484c848612f1c2fd0b63697724" id=930682d3-eda0-467b-9e5d-346e2132fa44 name=/runtime.v1.RuntimeService/StartContainer
	Dec 25 19:03:55 newest-cni-731832 crio[772]: time="2025-12-25T19:03:55.996708556Z" level=info msg="Started container" PID=1567 containerID=97cfdce66783da4264fba915969a0bfff73228484c848612f1c2fd0b63697724 description=kube-system/kube-proxy-gnqfh/kube-proxy id=930682d3-eda0-467b-9e5d-346e2132fa44 name=/runtime.v1.RuntimeService/StartContainer sandboxID=a71850245e78b0a08d12344791d900f9c06784bc7ea652bcf7b444de4e8af333
	Dec 25 19:03:57 newest-cni-731832 crio[772]: time="2025-12-25T19:03:57.229519332Z" level=info msg="Pulled image: docker.io/kindest/kindnetd@sha256:7c22558dc06a570d46ea6e8a73b23cdc754eb81f7c08d3441a3171ad359ffc27" id=17349dd1-430c-4dab-b410-8335d4a34199 name=/runtime.v1.ImageService/PullImage
	Dec 25 19:03:57 newest-cni-731832 crio[772]: time="2025-12-25T19:03:57.230351137Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20251212-v0.29.0-alpha-105-g20ccfc88" id=cb3dcee3-2ff4-494b-9f2d-216aa3220a29 name=/runtime.v1.ImageService/ImageStatus
	Dec 25 19:03:57 newest-cni-731832 crio[772]: time="2025-12-25T19:03:57.232553037Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20251212-v0.29.0-alpha-105-g20ccfc88" id=f79a5404-953d-47f1-81de-a3f73dab82b2 name=/runtime.v1.ImageService/ImageStatus
	Dec 25 19:03:57 newest-cni-731832 crio[772]: time="2025-12-25T19:03:57.235992496Z" level=info msg="Creating container: kube-system/kindnet-l587m/kindnet-cni" id=89b1659f-911e-4bde-bb33-4205914ac7ca name=/runtime.v1.RuntimeService/CreateContainer
	Dec 25 19:03:57 newest-cni-731832 crio[772]: time="2025-12-25T19:03:57.236080131Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 25 19:03:57 newest-cni-731832 crio[772]: time="2025-12-25T19:03:57.239567915Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 25 19:03:57 newest-cni-731832 crio[772]: time="2025-12-25T19:03:57.23996993Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 25 19:03:57 newest-cni-731832 crio[772]: time="2025-12-25T19:03:57.264193689Z" level=info msg="Created container 9e252f1c4de6bd2160c257666315b26b9b065ebf95418ac2a689f057db3e57a9: kube-system/kindnet-l587m/kindnet-cni" id=89b1659f-911e-4bde-bb33-4205914ac7ca name=/runtime.v1.RuntimeService/CreateContainer
	Dec 25 19:03:57 newest-cni-731832 crio[772]: time="2025-12-25T19:03:57.264814291Z" level=info msg="Starting container: 9e252f1c4de6bd2160c257666315b26b9b065ebf95418ac2a689f057db3e57a9" id=af5c126b-d669-4448-8795-d8a004214ea9 name=/runtime.v1.RuntimeService/StartContainer
	Dec 25 19:03:57 newest-cni-731832 crio[772]: time="2025-12-25T19:03:57.266758055Z" level=info msg="Started container" PID=1822 containerID=9e252f1c4de6bd2160c257666315b26b9b065ebf95418ac2a689f057db3e57a9 description=kube-system/kindnet-l587m/kindnet-cni id=af5c126b-d669-4448-8795-d8a004214ea9 name=/runtime.v1.RuntimeService/StartContainer sandboxID=df78eb75a5e149e6d3c608019fb5de066032cf4313c3a61c0cace638eef0c44e
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                CREATED                  STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	9e252f1c4de6b       docker.io/kindest/kindnetd@sha256:7c22558dc06a570d46ea6e8a73b23cdc754eb81f7c08d3441a3171ad359ffc27   Less than a second ago   Running             kindnet-cni               0                   df78eb75a5e14       kindnet-l587m                               kube-system
	97cfdce66783d       af0321f3a4f388cfb978464739c323ebf891a7b0b50cdfd7179e92f141dad42a                                     1 second ago             Running             kube-proxy                0                   a71850245e78b       kube-proxy-gnqfh                            kube-system
	cf12862987327       73f80cdc073daa4d501207f9e6dec1fa9eea5f27e8d347b8a0c4bad8811eecdc                                     13 seconds ago           Running             kube-scheduler            0                   be99a1e65aa39       kube-scheduler-newest-cni-731832            kube-system
	55da54ca61bec       0a108f7189562e99793bdecab61fdf1a7c9d913af3385de9da17fb9d6ff430e2                                     13 seconds ago           Running             etcd                      0                   3195ff428209a       etcd-newest-cni-731832                      kube-system
	28d2f7fadd6af       5032a56602e1b9bd8856699701b6148aa1b9901d05b61f893df3b57f84aca614                                     13 seconds ago           Running             kube-controller-manager   0                   282dea5da7b0f       kube-controller-manager-newest-cni-731832   kube-system
	f97bcc6b8e054       58865405a13bccac1d74bc3f446dddd22e6ef0d7ee8b52363c86dd31838976ce                                     13 seconds ago           Running             kube-apiserver            0                   0f4852a7ad0b8       kube-apiserver-newest-cni-731832            kube-system
	
	
	==> describe nodes <==
	Name:               newest-cni-731832
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=newest-cni-731832
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=65b0339f3ab6fa9cf527eb915d9288ef7a9c7fef
	                    minikube.k8s.io/name=newest-cni-731832
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_25T19_03_51_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 25 Dec 2025 19:03:45 +0000
	Taints:             node.kubernetes.io/not-ready:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  newest-cni-731832
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 25 Dec 2025 19:03:50 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 25 Dec 2025 19:03:50 +0000   Thu, 25 Dec 2025 19:03:44 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 25 Dec 2025 19:03:50 +0000   Thu, 25 Dec 2025 19:03:44 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 25 Dec 2025 19:03:50 +0000   Thu, 25 Dec 2025 19:03:44 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            False   Thu, 25 Dec 2025 19:03:50 +0000   Thu, 25 Dec 2025 19:03:44 +0000   KubeletNotReady              container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/cni/net.d/. Has your network provider started?
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    newest-cni-731832
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863360Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863360Ki
	  pods:               110
	System Info:
	  Machine ID:                 6a05ebbd9dbde47388fc365d694bbbfd
	  System UUID:                5b8d2f7a-018b-4c55-9c9b-3d6cf6b9276f
	  Boot ID:                    665c5054-bd76-444c-ba4d-23c4edde1464
	  Kernel Version:             6.8.0-1045-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.3
	  Kubelet Version:            v1.35.0-rc.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.42.0.0/24
	PodCIDRs:                     10.42.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  kube-system                 etcd-newest-cni-731832                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         9s
	  kube-system                 kindnet-l587m                                100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      2s
	  kube-system                 kube-apiserver-newest-cni-731832             250m (3%)     0 (0%)      0 (0%)           0 (0%)         11s
	  kube-system                 kube-controller-manager-newest-cni-731832    200m (2%)     0 (0%)      0 (0%)           0 (0%)         7s
	  kube-system                 kube-proxy-gnqfh                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         2s
	  kube-system                 kube-scheduler-newest-cni-731832             100m (1%)     0 (0%)      0 (0%)           0 (0%)         7s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (9%)   100m (1%)
	  memory             150Mi (0%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason          Age   From             Message
	  ----    ------          ----  ----             -------
	  Normal  RegisteredNode  3s    node-controller  Node newest-cni-731832 event: Registered Node newest-cni-731832 in Controller
	
	
	==> dmesg <==
	[Dec25 18:17] MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
	[  +0.001703] TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details.
	[  +0.001000] MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
	[  +0.084012] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
	[  +0.391152] i8042: Warning: Keylock active
	[  +0.010665] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.485479] block sda: the capability attribute has been deprecated.
	[  +0.079658] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.024208] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +4.790329] kauditd_printk_skb: 47 callbacks suppressed
	
	
	==> etcd [55da54ca61bec0aae7853d72d628803dabb9e75cd7f7236060942f448652d5f7] <==
	{"level":"warn","ts":"2025-12-25T19:03:47.230719Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"166.977382ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/statefulsets\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-12-25T19:03:47.230826Z","caller":"traceutil/trace.go:172","msg":"trace[1712471583] range","detail":"{range_begin:/registry/statefulsets; range_end:; response_count:0; response_revision:178; }","duration":"167.121034ms","start":"2025-12-25T19:03:47.063686Z","end":"2025-12-25T19:03:47.230807Z","steps":["trace[1712471583] 'agreement among raft nodes before linearized reading'  (duration: 83.254017ms)","trace[1712471583] 'range keys from in-memory index tree'  (duration: 83.687094ms)"],"step_count":2}
	{"level":"info","ts":"2025-12-25T19:03:47.230929Z","caller":"traceutil/trace.go:172","msg":"trace[1260960040] transaction","detail":"{read_only:false; response_revision:179; number_of_response:1; }","duration":"175.023643ms","start":"2025-12-25T19:03:47.055840Z","end":"2025-12-25T19:03:47.230864Z","steps":["trace[1260960040] 'process raft request'  (duration: 91.122419ms)","trace[1260960040] 'compare'  (duration: 83.667832ms)"],"step_count":2}
	{"level":"info","ts":"2025-12-25T19:03:47.537406Z","caller":"traceutil/trace.go:172","msg":"trace[991037406] transaction","detail":"{read_only:false; response_revision:182; number_of_response:1; }","duration":"181.13806ms","start":"2025-12-25T19:03:47.356247Z","end":"2025-12-25T19:03:47.537386Z","steps":["trace[991037406] 'process raft request'  (duration: 125.522622ms)","trace[991037406] 'compare'  (duration: 55.420833ms)"],"step_count":2}
	{"level":"info","ts":"2025-12-25T19:03:47.685612Z","caller":"traceutil/trace.go:172","msg":"trace[1871417748] transaction","detail":"{read_only:false; response_revision:185; number_of_response:1; }","duration":"106.837827ms","start":"2025-12-25T19:03:47.578748Z","end":"2025-12-25T19:03:47.685586Z","steps":["trace[1871417748] 'process raft request'  (duration: 85.434038ms)","trace[1871417748] 'compare'  (duration: 21.268539ms)"],"step_count":2}
	{"level":"warn","ts":"2025-12-25T19:03:48.089177Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"323.538631ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-12-25T19:03:48.089260Z","caller":"traceutil/trace.go:172","msg":"trace[444423122] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:186; }","duration":"323.62889ms","start":"2025-12-25T19:03:47.765610Z","end":"2025-12-25T19:03:48.089239Z","steps":["trace[444423122] 'agreement among raft nodes before linearized reading'  (duration: 71.016023ms)","trace[444423122] 'range keys from in-memory index tree'  (duration: 252.483491ms)"],"step_count":2}
	{"level":"warn","ts":"2025-12-25T19:03:48.089299Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-12-25T19:03:47.765580Z","time spent":"323.709948ms","remote":"127.0.0.1:44718","response type":"/etcdserverpb.KV/Range","request count":0,"request size":18,"response count":0,"response size":28,"request content":"key:\"/registry/health\" "}
	{"level":"warn","ts":"2025-12-25T19:03:48.089599Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"252.566866ms","expected-duration":"100ms","prefix":"","request":"header:<ID:9722597968078679798 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/clusterrolebindings/system:controller:selinux-warning-controller\" mod_revision:0 > success:<request_put:<key:\"/registry/clusterrolebindings/system:controller:selinux-warning-controller\" value_size:682 >> failure:<>>","response":"size:16"}
	{"level":"info","ts":"2025-12-25T19:03:48.089680Z","caller":"traceutil/trace.go:172","msg":"trace[247405633] transaction","detail":"{read_only:false; response_revision:187; number_of_response:1; }","duration":"391.990552ms","start":"2025-12-25T19:03:47.697677Z","end":"2025-12-25T19:03:48.089667Z","steps":["trace[247405633] 'process raft request'  (duration: 138.975482ms)","trace[247405633] 'compare'  (duration: 252.431884ms)"],"step_count":2}
	{"level":"warn","ts":"2025-12-25T19:03:48.089734Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-12-25T19:03:47.697663Z","time spent":"392.046278ms","remote":"127.0.0.1:45380","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":764,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/clusterrolebindings/system:controller:selinux-warning-controller\" mod_revision:0 > success:<request_put:<key:\"/registry/clusterrolebindings/system:controller:selinux-warning-controller\" value_size:682 >> failure:<>"}
	{"level":"warn","ts":"2025-12-25T19:03:48.345817Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"126.986843ms","expected-duration":"100ms","prefix":"","request":"header:<ID:9722597968078679803 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/roles/kube-system/extension-apiserver-authentication-reader\" mod_revision:0 > success:<request_put:<key:\"/registry/roles/kube-system/extension-apiserver-authentication-reader\" value_size:579 >> failure:<>>","response":"size:16"}
	{"level":"info","ts":"2025-12-25T19:03:48.345958Z","caller":"traceutil/trace.go:172","msg":"trace[2004804970] transaction","detail":"{read_only:false; response_revision:188; number_of_response:1; }","duration":"249.867557ms","start":"2025-12-25T19:03:48.096075Z","end":"2025-12-25T19:03:48.345943Z","steps":["trace[2004804970] 'process raft request'  (duration: 122.703923ms)","trace[2004804970] 'compare'  (duration: 126.876342ms)"],"step_count":2}
	{"level":"info","ts":"2025-12-25T19:03:48.410466Z","caller":"traceutil/trace.go:172","msg":"trace[256067201] linearizableReadLoop","detail":"{readStateIndex:192; appliedIndex:192; }","duration":"134.588857ms","start":"2025-12-25T19:03:48.275827Z","end":"2025-12-25T19:03:48.410416Z","steps":["trace[256067201] 'read index received'  (duration: 134.577184ms)","trace[256067201] 'applied index is now lower than readState.Index'  (duration: 7.522µs)"],"step_count":2}
	{"level":"warn","ts":"2025-12-25T19:03:48.410597Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"134.757575ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/csinodes\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-12-25T19:03:48.410629Z","caller":"traceutil/trace.go:172","msg":"trace[698251056] range","detail":"{range_begin:/registry/csinodes; range_end:; response_count:0; response_revision:188; }","duration":"134.800931ms","start":"2025-12-25T19:03:48.275816Z","end":"2025-12-25T19:03:48.410617Z","steps":["trace[698251056] 'agreement among raft nodes before linearized reading'  (duration: 134.709816ms)"],"step_count":1}
	{"level":"warn","ts":"2025-12-25T19:03:48.410735Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"100.880858ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/csidrivers\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-12-25T19:03:48.410764Z","caller":"traceutil/trace.go:172","msg":"trace[363458251] range","detail":"{range_begin:/registry/csidrivers; range_end:; response_count:0; response_revision:189; }","duration":"100.927966ms","start":"2025-12-25T19:03:48.309829Z","end":"2025-12-25T19:03:48.410757Z","steps":["trace[363458251] 'agreement among raft nodes before linearized reading'  (duration: 100.850189ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-25T19:03:48.410710Z","caller":"traceutil/trace.go:172","msg":"trace[734309934] transaction","detail":"{read_only:false; response_revision:189; number_of_response:1; }","duration":"154.024843ms","start":"2025-12-25T19:03:48.256667Z","end":"2025-12-25T19:03:48.410692Z","steps":["trace[734309934] 'process raft request'  (duration: 153.810274ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-25T19:03:48.550349Z","caller":"traceutil/trace.go:172","msg":"trace[2002458811] transaction","detail":"{read_only:false; response_revision:190; number_of_response:1; }","duration":"135.453132ms","start":"2025-12-25T19:03:48.414862Z","end":"2025-12-25T19:03:48.550315Z","steps":["trace[2002458811] 'process raft request'  (duration: 113.596202ms)","trace[2002458811] 'compare'  (duration: 21.641917ms)"],"step_count":2}
	{"level":"warn","ts":"2025-12-25T19:03:48.916190Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"150.073274ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-12-25T19:03:48.916259Z","caller":"traceutil/trace.go:172","msg":"trace[1628508564] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:192; }","duration":"150.157654ms","start":"2025-12-25T19:03:48.766087Z","end":"2025-12-25T19:03:48.916244Z","steps":["trace[1628508564] 'agreement among raft nodes before linearized reading'  (duration: 93.218532ms)","trace[1628508564] 'range keys from in-memory index tree'  (duration: 56.817782ms)"],"step_count":2}
	{"level":"info","ts":"2025-12-25T19:03:48.916361Z","caller":"traceutil/trace.go:172","msg":"trace[1915988858] transaction","detail":"{read_only:false; response_revision:193; number_of_response:1; }","duration":"178.225314ms","start":"2025-12-25T19:03:48.738113Z","end":"2025-12-25T19:03:48.916338Z","steps":["trace[1915988858] 'process raft request'  (duration: 121.202423ms)","trace[1915988858] 'compare'  (duration: 56.843521ms)"],"step_count":2}
	{"level":"warn","ts":"2025-12-25T19:03:48.916463Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"128.380042ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/poddisruptionbudgets\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-12-25T19:03:48.916532Z","caller":"traceutil/trace.go:172","msg":"trace[353370780] range","detail":"{range_begin:/registry/poddisruptionbudgets; range_end:; response_count:0; response_revision:193; }","duration":"128.438518ms","start":"2025-12-25T19:03:48.788064Z","end":"2025-12-25T19:03:48.916502Z","steps":["trace[353370780] 'agreement among raft nodes before linearized reading'  (duration: 128.341859ms)"],"step_count":1}
	
	
	==> kernel <==
	 19:03:57 up 46 min,  0 user,  load average: 2.95, 2.55, 1.84
	Linux newest-cni-731832 6.8.0-1045-gcp #48~22.04.1-Ubuntu SMP Tue Nov 25 13:07:56 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [9e252f1c4de6bd2160c257666315b26b9b065ebf95418ac2a689f057db3e57a9] <==
	I1225 19:03:57.376505       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1225 19:03:57.376755       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1225 19:03:57.376877       1 main.go:148] setting mtu 1500 for CNI 
	I1225 19:03:57.468340       1 main.go:178] kindnetd IP family: "ipv4"
	I1225 19:03:57.468389       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-12-25T19:03:57Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	
	
	==> kube-apiserver [f97bcc6b8e054d7add30a2521f19b9e4471bac03f5b76386e09ba2427d0e9612] <==
	I1225 19:03:45.663587       1 default_servicecidr_controller.go:169] Creating default ServiceCIDR with CIDRs: [10.96.0.0/12]
	I1225 19:03:45.664546       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1225 19:03:45.665069       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1225 19:03:45.665339       1 controller.go:667] quota admission added evaluator for: namespaces
	I1225 19:03:45.672144       1 default_servicecidr_controller.go:231] Setting default ServiceCIDR condition Ready to True
	I1225 19:03:45.677363       1 cidrallocator.go:302] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1225 19:03:45.679146       1 cidrallocator.go:278] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1225 19:03:45.864241       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1225 19:03:46.568227       1 storage_scheduling.go:123] created PriorityClass system-node-critical with value 2000001000
	I1225 19:03:46.571934       1 storage_scheduling.go:123] created PriorityClass system-cluster-critical with value 2000000000
	I1225 19:03:46.571950       1 storage_scheduling.go:139] all system priority classes are created successfully or already exist.
	I1225 19:03:48.095627       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1225 19:03:48.951084       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1225 19:03:49.071989       1 alloc.go:329] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1225 19:03:49.078063       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.85.2]
	I1225 19:03:49.079175       1 controller.go:667] quota admission added evaluator for: endpoints
	I1225 19:03:49.083551       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1225 19:03:49.594924       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1225 19:03:50.283865       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1225 19:03:50.292559       1 alloc.go:329] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1225 19:03:50.300166       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1225 19:03:55.052005       1 cidrallocator.go:278] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1225 19:03:55.055974       1 cidrallocator.go:278] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1225 19:03:55.497891       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1225 19:03:55.597178       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	
	
	==> kube-controller-manager [28d2f7fadd6afbf8df3f61512b504ada47b03b8c22ed90b203f003c3eb91ea07] <==
	I1225 19:03:54.403754       1 shared_informer.go:377] "Caches are synced"
	I1225 19:03:54.403739       1 shared_informer.go:377] "Caches are synced"
	I1225 19:03:54.404505       1 shared_informer.go:377] "Caches are synced"
	I1225 19:03:54.404530       1 shared_informer.go:377] "Caches are synced"
	I1225 19:03:54.404537       1 shared_informer.go:377] "Caches are synced"
	I1225 19:03:54.403757       1 shared_informer.go:377] "Caches are synced"
	I1225 19:03:54.404546       1 shared_informer.go:377] "Caches are synced"
	I1225 19:03:54.404553       1 shared_informer.go:377] "Caches are synced"
	I1225 19:03:54.404571       1 shared_informer.go:377] "Caches are synced"
	I1225 19:03:54.404466       1 shared_informer.go:377] "Caches are synced"
	I1225 19:03:54.405043       1 shared_informer.go:377] "Caches are synced"
	I1225 19:03:54.405134       1 shared_informer.go:377] "Caches are synced"
	I1225 19:03:54.404522       1 shared_informer.go:377] "Caches are synced"
	I1225 19:03:54.404564       1 shared_informer.go:377] "Caches are synced"
	I1225 19:03:54.403720       1 shared_informer.go:377] "Caches are synced"
	I1225 19:03:54.404514       1 shared_informer.go:377] "Caches are synced"
	I1225 19:03:54.403716       1 shared_informer.go:377] "Caches are synced"
	I1225 19:03:54.405046       1 shared_informer.go:377] "Caches are synced"
	I1225 19:03:54.408821       1 shared_informer.go:377] "Caches are synced"
	I1225 19:03:54.415276       1 shared_informer.go:370] "Waiting for caches to sync"
	I1225 19:03:54.417851       1 range_allocator.go:433] "Set node PodCIDR" node="newest-cni-731832" podCIDRs=["10.42.0.0/24"]
	I1225 19:03:54.505058       1 shared_informer.go:377] "Caches are synced"
	I1225 19:03:54.505081       1 garbagecollector.go:166] "Garbage collector: all resource monitors have synced"
	I1225 19:03:54.505088       1 garbagecollector.go:169] "Proceeding to collect garbage"
	I1225 19:03:54.516408       1 shared_informer.go:377] "Caches are synced"
	
	
	==> kube-proxy [97cfdce66783da4264fba915969a0bfff73228484c848612f1c2fd0b63697724] <==
	I1225 19:03:56.034483       1 server_linux.go:53] "Using iptables proxy"
	I1225 19:03:56.102833       1 shared_informer.go:370] "Waiting for caches to sync"
	I1225 19:03:56.203013       1 shared_informer.go:377] "Caches are synced"
	I1225 19:03:56.203065       1 server.go:218] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1225 19:03:56.203208       1 server.go:255] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1225 19:03:56.224807       1 server.go:264] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1225 19:03:56.224869       1 server_linux.go:136] "Using iptables Proxier"
	I1225 19:03:56.230361       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1225 19:03:56.230870       1 server.go:529] "Version info" version="v1.35.0-rc.1"
	I1225 19:03:56.230953       1 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1225 19:03:56.232413       1 config.go:403] "Starting serviceCIDR config controller"
	I1225 19:03:56.232435       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1225 19:03:56.232449       1 config.go:200] "Starting service config controller"
	I1225 19:03:56.232456       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1225 19:03:56.232478       1 config.go:106] "Starting endpoint slice config controller"
	I1225 19:03:56.232483       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1225 19:03:56.232541       1 config.go:309] "Starting node config controller"
	I1225 19:03:56.232551       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1225 19:03:56.333181       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1225 19:03:56.333192       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1225 19:03:56.333191       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1225 19:03:56.333232       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [cf1286298732717e743ff590ad7d7f0a2dcdd4e40cf54ca296300603a89a8dce] <==
	E1225 19:03:45.627342       1 reflector.go:204] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.PodDisruptionBudget"
	E1225 19:03:45.627619       1 reflector.go:204] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.StorageClass"
	E1225 19:03:45.627658       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Service"
	E1225 19:03:45.627658       1 reflector.go:204] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.DeviceClass"
	E1225 19:03:45.628261       1 reflector.go:204] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.PersistentVolumeClaim"
	E1225 19:03:45.628611       1 reflector.go:204] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.CSINode"
	E1225 19:03:45.628641       1 reflector.go:204] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.VolumeAttachment"
	E1225 19:03:45.628697       1 reflector.go:204] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.StatefulSet"
	E1225 19:03:45.628715       1 reflector.go:204] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.CSIStorageCapacity"
	E1225 19:03:45.628787       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Node"
	E1225 19:03:45.628794       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ResourceSlice"
	E1225 19:03:46.479059       1 reflector.go:204] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.CSIDriver"
	E1225 19:03:46.527154       1 reflector.go:204] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.PersistentVolume"
	E1225 19:03:46.577579       1 reflector.go:204] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.CSINode"
	E1225 19:03:46.610103       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Service"
	E1225 19:03:46.704928       1 reflector.go:204] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.VolumeAttachment"
	E1225 19:03:46.718213       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ResourceClaim"
	E1225 19:03:46.733770       1 reflector.go:204] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.PodDisruptionBudget"
	E1225 19:03:46.787830       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ResourceSlice"
	E1225 19:03:46.788162       1 reflector.go:204] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.StorageClass"
	E1225 19:03:46.821015       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ReplicationController"
	E1225 19:03:46.821100       1 reflector.go:204] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.DeviceClass"
	E1225 19:03:46.896575       1 reflector.go:204] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.CSIStorageCapacity"
	E1225 19:03:46.977378       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1693" type="*v1.ConfigMap"
	I1225 19:03:49.621674       1 shared_informer.go:377] "Caches are synced"
	
	
	==> kubelet <==
	Dec 25 19:03:51 newest-cni-731832 kubelet[1290]: E1225 19:03:51.175228    1290 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-newest-cni-731832" containerName="kube-apiserver"
	Dec 25 19:03:51 newest-cni-731832 kubelet[1290]: E1225 19:03:51.176615    1290 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-controller-manager-newest-cni-731832\" already exists" pod="kube-system/kube-controller-manager-newest-cni-731832"
	Dec 25 19:03:51 newest-cni-731832 kubelet[1290]: E1225 19:03:51.176693    1290 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-controller-manager-newest-cni-731832" containerName="kube-controller-manager"
	Dec 25 19:03:51 newest-cni-731832 kubelet[1290]: I1225 19:03:51.205152    1290 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/kube-apiserver-newest-cni-731832" podStartSLOduration=5.205130654 podStartE2EDuration="5.205130654s" podCreationTimestamp="2025-12-25 19:03:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-25 19:03:51.193085091 +0000 UTC m=+1.138661909" watchObservedRunningTime="2025-12-25 19:03:51.205130654 +0000 UTC m=+1.150707483"
	Dec 25 19:03:51 newest-cni-731832 kubelet[1290]: I1225 19:03:51.205405    1290 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/kube-controller-manager-newest-cni-731832" podStartSLOduration=1.205396792 podStartE2EDuration="1.205396792s" podCreationTimestamp="2025-12-25 19:03:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-25 19:03:51.204483803 +0000 UTC m=+1.150060621" watchObservedRunningTime="2025-12-25 19:03:51.205396792 +0000 UTC m=+1.150973626"
	Dec 25 19:03:51 newest-cni-731832 kubelet[1290]: I1225 19:03:51.230056    1290 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/kube-scheduler-newest-cni-731832" podStartSLOduration=1.230035635 podStartE2EDuration="1.230035635s" podCreationTimestamp="2025-12-25 19:03:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-25 19:03:51.230018745 +0000 UTC m=+1.175595562" watchObservedRunningTime="2025-12-25 19:03:51.230035635 +0000 UTC m=+1.175612454"
	Dec 25 19:03:51 newest-cni-731832 kubelet[1290]: I1225 19:03:51.230169    1290 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/etcd-newest-cni-731832" podStartSLOduration=3.230161561 podStartE2EDuration="3.230161561s" podCreationTimestamp="2025-12-25 19:03:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-25 19:03:51.216960049 +0000 UTC m=+1.162536867" watchObservedRunningTime="2025-12-25 19:03:51.230161561 +0000 UTC m=+1.175738380"
	Dec 25 19:03:52 newest-cni-731832 kubelet[1290]: E1225 19:03:52.166399    1290 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-controller-manager-newest-cni-731832" containerName="kube-controller-manager"
	Dec 25 19:03:52 newest-cni-731832 kubelet[1290]: E1225 19:03:52.166526    1290 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-newest-cni-731832" containerName="kube-scheduler"
	Dec 25 19:03:52 newest-cni-731832 kubelet[1290]: E1225 19:03:52.166632    1290 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-newest-cni-731832" containerName="kube-apiserver"
	Dec 25 19:03:52 newest-cni-731832 kubelet[1290]: E1225 19:03:52.166744    1290 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/etcd-newest-cni-731832" containerName="etcd"
	Dec 25 19:03:53 newest-cni-731832 kubelet[1290]: E1225 19:03:53.170135    1290 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/etcd-newest-cni-731832" containerName="etcd"
	Dec 25 19:03:53 newest-cni-731832 kubelet[1290]: E1225 19:03:53.170304    1290 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-newest-cni-731832" containerName="kube-apiserver"
	Dec 25 19:03:54 newest-cni-731832 kubelet[1290]: I1225 19:03:54.475547    1290 kuberuntime_manager.go:2062] "Updating runtime config through cri with podcidr" CIDR="10.42.0.0/24"
	Dec 25 19:03:54 newest-cni-731832 kubelet[1290]: I1225 19:03:54.476345    1290 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.42.0.0/24"
	Dec 25 19:03:55 newest-cni-731832 kubelet[1290]: I1225 19:03:55.668415    1290 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/6a88d1e0-b81d-4b51-a2dd-283548deb416-lib-modules\") pod \"kindnet-l587m\" (UID: \"6a88d1e0-b81d-4b51-a2dd-283548deb416\") " pod="kube-system/kindnet-l587m"
	Dec 25 19:03:55 newest-cni-731832 kubelet[1290]: I1225 19:03:55.669073    1290 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/6a88d1e0-b81d-4b51-a2dd-283548deb416-xtables-lock\") pod \"kindnet-l587m\" (UID: \"6a88d1e0-b81d-4b51-a2dd-283548deb416\") " pod="kube-system/kindnet-l587m"
	Dec 25 19:03:55 newest-cni-731832 kubelet[1290]: I1225 19:03:55.669161    1290 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/7a8b403f-215a-402e-80a0-8c070cdc4875-kube-proxy\") pod \"kube-proxy-gnqfh\" (UID: \"7a8b403f-215a-402e-80a0-8c070cdc4875\") " pod="kube-system/kube-proxy-gnqfh"
	Dec 25 19:03:55 newest-cni-731832 kubelet[1290]: I1225 19:03:55.669237    1290 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/7a8b403f-215a-402e-80a0-8c070cdc4875-xtables-lock\") pod \"kube-proxy-gnqfh\" (UID: \"7a8b403f-215a-402e-80a0-8c070cdc4875\") " pod="kube-system/kube-proxy-gnqfh"
	Dec 25 19:03:55 newest-cni-731832 kubelet[1290]: I1225 19:03:55.669300    1290 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/6a88d1e0-b81d-4b51-a2dd-283548deb416-cni-cfg\") pod \"kindnet-l587m\" (UID: \"6a88d1e0-b81d-4b51-a2dd-283548deb416\") " pod="kube-system/kindnet-l587m"
	Dec 25 19:03:55 newest-cni-731832 kubelet[1290]: I1225 19:03:55.669328    1290 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-trq8k\" (UniqueName: \"kubernetes.io/projected/6a88d1e0-b81d-4b51-a2dd-283548deb416-kube-api-access-trq8k\") pod \"kindnet-l587m\" (UID: \"6a88d1e0-b81d-4b51-a2dd-283548deb416\") " pod="kube-system/kindnet-l587m"
	Dec 25 19:03:55 newest-cni-731832 kubelet[1290]: I1225 19:03:55.669388    1290 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/7a8b403f-215a-402e-80a0-8c070cdc4875-lib-modules\") pod \"kube-proxy-gnqfh\" (UID: \"7a8b403f-215a-402e-80a0-8c070cdc4875\") " pod="kube-system/kube-proxy-gnqfh"
	Dec 25 19:03:55 newest-cni-731832 kubelet[1290]: I1225 19:03:55.669452    1290 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gfvhb\" (UniqueName: \"kubernetes.io/projected/7a8b403f-215a-402e-80a0-8c070cdc4875-kube-api-access-gfvhb\") pod \"kube-proxy-gnqfh\" (UID: \"7a8b403f-215a-402e-80a0-8c070cdc4875\") " pod="kube-system/kube-proxy-gnqfh"
	Dec 25 19:03:56 newest-cni-731832 kubelet[1290]: E1225 19:03:56.422460    1290 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-newest-cni-731832" containerName="kube-scheduler"
	Dec 25 19:03:56 newest-cni-731832 kubelet[1290]: I1225 19:03:56.434450    1290 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/kube-proxy-gnqfh" podStartSLOduration=1.434430552 podStartE2EDuration="1.434430552s" podCreationTimestamp="2025-12-25 19:03:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-25 19:03:56.192035075 +0000 UTC m=+6.137611890" watchObservedRunningTime="2025-12-25 19:03:56.434430552 +0000 UTC m=+6.380007370"
	

                                                
                                                
-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-731832 -n newest-cni-731832
helpers_test.go:270: (dbg) Run:  kubectl --context newest-cni-731832 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:281: non-running pods: coredns-7d764666f9-hsm6h storage-provisioner
helpers_test.go:283: ======> post-mortem[TestStartStop/group/newest-cni/serial/EnableAddonWhileActive]: describe non-running pods <======
helpers_test.go:286: (dbg) Run:  kubectl --context newest-cni-731832 describe pod coredns-7d764666f9-hsm6h storage-provisioner
helpers_test.go:286: (dbg) Non-zero exit: kubectl --context newest-cni-731832 describe pod coredns-7d764666f9-hsm6h storage-provisioner: exit status 1 (60.904993ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "coredns-7d764666f9-hsm6h" not found
	Error from server (NotFound): pods "storage-provisioner" not found

                                                
                                                
** /stderr **
helpers_test.go:288: kubectl --context newest-cni-731832 describe pod coredns-7d764666f9-hsm6h storage-provisioner: exit status 1
--- FAIL: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (2.27s)
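Note: the describe step above most likely exits 1 because the namespace-wide field-selector query returns only pod names, which are then passed to kubectl describe without a namespace, so the lookup happens in the default namespace and both kube-system pods report NotFound. A minimal Go sketch of the same post-mortem check, assuming only that kubectl is on PATH and the newest-cni-731832 context from this run still exists; the jsonpath keeps the namespace next to each name so the per-pod describe is addressed correctly:

// Sketch of the post-mortem check above (not the harness's own code).
// Assumes kubectl is on PATH and the newest-cni-731832 context is present.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	ctx := "newest-cni-731832" // profile/context under test
	list, err := exec.Command("kubectl", "--context", ctx, "get", "po", "-A",
		"--field-selector=status.phase!=Running",
		"-o", `jsonpath={range .items[*]}{.metadata.namespace}{" "}{.metadata.name}{"\n"}{end}`).Output()
	if err != nil {
		fmt.Println("listing non-running pods failed:", err)
		return
	}
	for _, line := range strings.Split(strings.TrimSpace(string(list)), "\n") {
		if line == "" {
			continue
		}
		f := strings.Fields(line) // f[0] = namespace, f[1] = pod name
		out, err := exec.Command("kubectl", "--context", ctx, "-n", f[0],
			"describe", "pod", f[1]).CombinedOutput()
		if err != nil {
			fmt.Printf("describe %s/%s failed: %v\n", f[0], f[1], err)
		}
		fmt.Print(string(out))
	}
}

Run against a live profile this prints each non-running pod's events instead of the NotFound errors seen above.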

                                                
                                    
TestStartStop/group/newest-cni/serial/Pause (6.46s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p newest-cni-731832 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 pause -p newest-cni-731832 --alsologtostderr -v=1: exit status 80 (2.539699689s)

                                                
                                                
-- stdout --
	* Pausing node newest-cni-731832 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1225 19:04:17.828439  312840 out.go:360] Setting OutFile to fd 1 ...
	I1225 19:04:17.828712  312840 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1225 19:04:17.828724  312840 out.go:374] Setting ErrFile to fd 2...
	I1225 19:04:17.828732  312840 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1225 19:04:17.828963  312840 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22301-5579/.minikube/bin
	I1225 19:04:17.829295  312840 out.go:368] Setting JSON to false
	I1225 19:04:17.829319  312840 mustload.go:66] Loading cluster: newest-cni-731832
	I1225 19:04:17.829719  312840 config.go:182] Loaded profile config "newest-cni-731832": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-rc.1
	I1225 19:04:17.830139  312840 cli_runner.go:164] Run: docker container inspect newest-cni-731832 --format={{.State.Status}}
	I1225 19:04:17.848165  312840 host.go:66] Checking if "newest-cni-731832" exists ...
	I1225 19:04:17.848450  312840 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1225 19:04:17.911781  312840 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:4 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:77 OomKillDisable:false NGoroutines:85 SystemTime:2025-12-25 19:04:17.900306529 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:dea7da592f5d1d2b7755e3a161be07f43fad8f75 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1225 19:04:17.912584  312840 pause.go:60] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-
cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/22316/minikube-v1.37.0-1766570787-22316-amd64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1766570787-22316/minikube-v1.37.0-1766570787-22316-amd64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1766570787-22316-amd64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qe
mu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) preload-source:auto profile:newest-cni-731832 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) rosetta:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool
=true) wantupdatenotification:%!s(bool=true) wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1225 19:04:17.914647  312840 out.go:179] * Pausing node newest-cni-731832 ... 
	I1225 19:04:17.915759  312840 host.go:66] Checking if "newest-cni-731832" exists ...
	I1225 19:04:17.916031  312840 ssh_runner.go:195] Run: systemctl --version
	I1225 19:04:17.916072  312840 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-731832
	I1225 19:04:17.936385  312840 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33103 SSHKeyPath:/home/jenkins/minikube-integration/22301-5579/.minikube/machines/newest-cni-731832/id_rsa Username:docker}
	I1225 19:04:18.031931  312840 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1225 19:04:18.044153  312840 pause.go:52] kubelet running: true
	I1225 19:04:18.044211  312840 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1225 19:04:18.188496  312840 cri.go:61] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1225 19:04:18.188582  312840 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1225 19:04:18.259736  312840 cri.go:96] found id: "32ba98d006f6f3a3154c40ff151535abf5952d3effea067df2b776e9329f7596"
	I1225 19:04:18.259764  312840 cri.go:96] found id: "30a747c2e4c477b43905a2ae570c93b6cc50fa6dc00fdd514232650211e0a2b6"
	I1225 19:04:18.259770  312840 cri.go:96] found id: "e02cd2fcac3d735d321c341c2fba7aabc974e0d4826fa67f14fd79754e0c64c4"
	I1225 19:04:18.259775  312840 cri.go:96] found id: "75fd7f6e481e82625456301d656dce65b6f0292112145825cd68747d96e652ac"
	I1225 19:04:18.259780  312840 cri.go:96] found id: "f7d1c87d0020257be0bb0226c540e4432cc1529072a6a6a02e9610ce7d2a72ad"
	I1225 19:04:18.259785  312840 cri.go:96] found id: "7cd3b0eb1fd2e4969002541b2f4ae25ee7229906d8fe3533bb4ab750efb6b446"
	I1225 19:04:18.259789  312840 cri.go:96] found id: ""
	I1225 19:04:18.259838  312840 ssh_runner.go:195] Run: sudo runc list -f json
	I1225 19:04:18.271725  312840 retry.go:84] will retry after 300ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-25T19:04:18Z" level=error msg="open /run/runc: no such file or directory"
	I1225 19:04:18.620099  312840 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1225 19:04:18.636472  312840 pause.go:52] kubelet running: false
	I1225 19:04:18.636529  312840 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1225 19:04:18.832250  312840 cri.go:61] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1225 19:04:18.832342  312840 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1225 19:04:18.958181  312840 cri.go:96] found id: "32ba98d006f6f3a3154c40ff151535abf5952d3effea067df2b776e9329f7596"
	I1225 19:04:18.958215  312840 cri.go:96] found id: "30a747c2e4c477b43905a2ae570c93b6cc50fa6dc00fdd514232650211e0a2b6"
	I1225 19:04:18.958222  312840 cri.go:96] found id: "e02cd2fcac3d735d321c341c2fba7aabc974e0d4826fa67f14fd79754e0c64c4"
	I1225 19:04:18.958228  312840 cri.go:96] found id: "75fd7f6e481e82625456301d656dce65b6f0292112145825cd68747d96e652ac"
	I1225 19:04:18.958233  312840 cri.go:96] found id: "f7d1c87d0020257be0bb0226c540e4432cc1529072a6a6a02e9610ce7d2a72ad"
	I1225 19:04:18.958238  312840 cri.go:96] found id: "7cd3b0eb1fd2e4969002541b2f4ae25ee7229906d8fe3533bb4ab750efb6b446"
	I1225 19:04:18.958242  312840 cri.go:96] found id: ""
	I1225 19:04:18.958306  312840 ssh_runner.go:195] Run: sudo runc list -f json
	I1225 19:04:19.164776  312840 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1225 19:04:19.178530  312840 pause.go:52] kubelet running: false
	I1225 19:04:19.178593  312840 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1225 19:04:19.335173  312840 cri.go:61] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1225 19:04:19.335270  312840 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1225 19:04:19.423722  312840 cri.go:96] found id: "32ba98d006f6f3a3154c40ff151535abf5952d3effea067df2b776e9329f7596"
	I1225 19:04:19.423746  312840 cri.go:96] found id: "30a747c2e4c477b43905a2ae570c93b6cc50fa6dc00fdd514232650211e0a2b6"
	I1225 19:04:19.423752  312840 cri.go:96] found id: "e02cd2fcac3d735d321c341c2fba7aabc974e0d4826fa67f14fd79754e0c64c4"
	I1225 19:04:19.423758  312840 cri.go:96] found id: "75fd7f6e481e82625456301d656dce65b6f0292112145825cd68747d96e652ac"
	I1225 19:04:19.423762  312840 cri.go:96] found id: "f7d1c87d0020257be0bb0226c540e4432cc1529072a6a6a02e9610ce7d2a72ad"
	I1225 19:04:19.423767  312840 cri.go:96] found id: "7cd3b0eb1fd2e4969002541b2f4ae25ee7229906d8fe3533bb4ab750efb6b446"
	I1225 19:04:19.423772  312840 cri.go:96] found id: ""
	I1225 19:04:19.423817  312840 ssh_runner.go:195] Run: sudo runc list -f json
	I1225 19:04:20.031965  312840 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1225 19:04:20.046101  312840 pause.go:52] kubelet running: false
	I1225 19:04:20.046152  312840 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1225 19:04:20.176032  312840 cri.go:61] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1225 19:04:20.176112  312840 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1225 19:04:20.269887  312840 cri.go:96] found id: "32ba98d006f6f3a3154c40ff151535abf5952d3effea067df2b776e9329f7596"
	I1225 19:04:20.269953  312840 cri.go:96] found id: "30a747c2e4c477b43905a2ae570c93b6cc50fa6dc00fdd514232650211e0a2b6"
	I1225 19:04:20.269963  312840 cri.go:96] found id: "e02cd2fcac3d735d321c341c2fba7aabc974e0d4826fa67f14fd79754e0c64c4"
	I1225 19:04:20.269969  312840 cri.go:96] found id: "75fd7f6e481e82625456301d656dce65b6f0292112145825cd68747d96e652ac"
	I1225 19:04:20.269984  312840 cri.go:96] found id: "f7d1c87d0020257be0bb0226c540e4432cc1529072a6a6a02e9610ce7d2a72ad"
	I1225 19:04:20.269989  312840 cri.go:96] found id: "7cd3b0eb1fd2e4969002541b2f4ae25ee7229906d8fe3533bb4ab750efb6b446"
	I1225 19:04:20.270015  312840 cri.go:96] found id: ""
	I1225 19:04:20.270062  312840 ssh_runner.go:195] Run: sudo runc list -f json
	I1225 19:04:20.290392  312840 out.go:203] 
	W1225 19:04:20.291630  312840 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-25T19:04:20Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-25T19:04:20Z" level=error msg="open /run/runc: no such file or directory"
	
	W1225 19:04:20.291649  312840 out.go:285] * 
	* 
	W1225 19:04:20.296065  312840 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1225 19:04:20.299926  312840 out.go:203] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:309: out/minikube-linux-amd64 pause -p newest-cni-731832 --alsologtostderr -v=1 failed: exit status 80
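The pause fails before any container is touched: each attempt in the stderr above runs "sudo runc list -f json" over SSH and gets "open /run/runc: no such file or directory", while the crictl listing in the same loop keeps returning container IDs. A small Go sketch that reproduces those two probes from the host, assuming docker is on PATH and the newest-cni-731832 kic container (see the docker inspect below) is still running; the commands and flags are copied from the stderr log:

// Reproduces the two container-listing probes from the failed pause attempt.
// Hypothetical helper, not minikube's own code.
package main

import (
	"fmt"
	"os/exec"
)

func run(name string, args ...string) {
	out, err := exec.Command(name, args...).CombinedOutput()
	fmt.Printf("$ %s %v\nerr=%v\n%s\n", name, args, err, out)
}

func main() {
	node := "newest-cni-731832" // kic container name from docker inspect

	// CRI-level listing used by minikube; in the log this returned IDs.
	run("docker", "exec", node, "sudo", "crictl", "--timeout=10s", "ps", "-a",
		"--quiet", "--label", "io.kubernetes.pod.namespace=kube-system")

	// runc-level listing that pause depends on; on this node it failed with
	// "open /run/runc: no such file or directory".
	run("docker", "exec", node, "sudo", "runc", "list", "-f", "json")
}

This only reproduces the failing probe; where runc keeps its state directory depends on how CRI-O is configured to launch it, so the sketch does not explain why /run/runc is missing on this node.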
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect newest-cni-731832
helpers_test.go:244: (dbg) docker inspect newest-cni-731832:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "0d7dffda1d2c4721b68cb1c1ffbf33c95c8a8bd29b65c76f162d82b8c375ce81",
	        "Created": "2025-12-25T19:03:37.514242235Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 309027,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-25T19:04:07.094478298Z",
	            "FinishedAt": "2025-12-25T19:04:06.156184287Z"
	        },
	        "Image": "sha256:c444b5e11df9d6bc496256b0e11f4e11deb33a885211ded2c3a4667df55bcb6b",
	        "ResolvConfPath": "/var/lib/docker/containers/0d7dffda1d2c4721b68cb1c1ffbf33c95c8a8bd29b65c76f162d82b8c375ce81/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/0d7dffda1d2c4721b68cb1c1ffbf33c95c8a8bd29b65c76f162d82b8c375ce81/hostname",
	        "HostsPath": "/var/lib/docker/containers/0d7dffda1d2c4721b68cb1c1ffbf33c95c8a8bd29b65c76f162d82b8c375ce81/hosts",
	        "LogPath": "/var/lib/docker/containers/0d7dffda1d2c4721b68cb1c1ffbf33c95c8a8bd29b65c76f162d82b8c375ce81/0d7dffda1d2c4721b68cb1c1ffbf33c95c8a8bd29b65c76f162d82b8c375ce81-json.log",
	        "Name": "/newest-cni-731832",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "newest-cni-731832:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "newest-cni-731832",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "0d7dffda1d2c4721b68cb1c1ffbf33c95c8a8bd29b65c76f162d82b8c375ce81",
	                "LowerDir": "/var/lib/docker/overlay2/d5cd8bb494ab04f4dcb5a30632bc8011864511df29c5ed2fb3f9b7b62d5e6d92-init/diff:/var/lib/docker/overlay2/8152586e7e91edad0090b5c322534edd1346ae6dc28cbca1827aa4c23f366758/diff",
	                "MergedDir": "/var/lib/docker/overlay2/d5cd8bb494ab04f4dcb5a30632bc8011864511df29c5ed2fb3f9b7b62d5e6d92/merged",
	                "UpperDir": "/var/lib/docker/overlay2/d5cd8bb494ab04f4dcb5a30632bc8011864511df29c5ed2fb3f9b7b62d5e6d92/diff",
	                "WorkDir": "/var/lib/docker/overlay2/d5cd8bb494ab04f4dcb5a30632bc8011864511df29c5ed2fb3f9b7b62d5e6d92/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "newest-cni-731832",
	                "Source": "/var/lib/docker/volumes/newest-cni-731832/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "newest-cni-731832",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "newest-cni-731832",
	                "name.minikube.sigs.k8s.io": "newest-cni-731832",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "137e7e11c1af0c255dc0bba4c9516b4e31185bd3b67b32c2456c89d52efc61f8",
	            "SandboxKey": "/var/run/docker/netns/137e7e11c1af",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33103"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33104"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33107"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33105"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33106"
	                    }
	                ]
	            },
	            "Networks": {
	                "newest-cni-731832": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "360ef2d655feed4b5ef1f2b45737dda354b50d02cd936b222228be43a9a6ef2b",
	                    "EndpointID": "8b4726365c05d8bfa7fb609f5719653f2e5ca5c46e531d275990249ae5c87ff2",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "MacAddress": "72:60:42:6f:d4:ea",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "newest-cni-731832",
	                        "0d7dffda1d2c"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
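The Ports map in the inspect output above is what the harness's docker container inspect template reads to find the SSH endpoint (22/tcp mapped to 127.0.0.1:33103 here). A short sketch of that lookup, assuming only that docker is on PATH and the container has not been deleted:

// Extracts the host port mapped to 22/tcp, mirroring the
// `docker container inspect -f ... HostPort ...` call seen earlier in the log.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	tmpl := `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`
	out, err := exec.Command("docker", "container", "inspect",
		"-f", tmpl, "newest-cni-731832").Output()
	if err != nil {
		fmt.Println("inspect failed:", err)
		return
	}
	// For the container above this prints 33103; ssh then targets
	// docker@127.0.0.1:<port> with the profile's id_rsa key.
	fmt.Println(strings.TrimSpace(string(out)))
}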
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-731832 -n newest-cni-731832
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-731832 -n newest-cni-731832: exit status 2 (417.192904ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
helpers_test.go:253: <<< TestStartStop/group/newest-cni/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-731832 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-amd64 -p newest-cni-731832 logs -n 25: (1.075223807s)
helpers_test.go:261: TestStartStop/group/newest-cni/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────
────────────┐
	│ COMMAND │                                                                                                                        ARGS                                                                                                                        │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────
────────────┤
	│ pause   │ -p old-k8s-version-163446 --alsologtostderr -v=1                                                                                                                                                                                                   │ old-k8s-version-163446       │ jenkins │ v1.37.0 │ 25 Dec 25 19:02 UTC │                     │
	│ delete  │ -p old-k8s-version-163446                                                                                                                                                                                                                          │ old-k8s-version-163446       │ jenkins │ v1.37.0 │ 25 Dec 25 19:02 UTC │ 25 Dec 25 19:03 UTC │
	│ delete  │ -p old-k8s-version-163446                                                                                                                                                                                                                          │ old-k8s-version-163446       │ jenkins │ v1.37.0 │ 25 Dec 25 19:03 UTC │ 25 Dec 25 19:03 UTC │
	│ delete  │ -p disable-driver-mounts-102827                                                                                                                                                                                                                    │ disable-driver-mounts-102827 │ jenkins │ v1.37.0 │ 25 Dec 25 19:03 UTC │ 25 Dec 25 19:03 UTC │
	│ start   │ -p default-k8s-diff-port-960022 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.3                                                                           │ default-k8s-diff-port-960022 │ jenkins │ v1.37.0 │ 25 Dec 25 19:03 UTC │ 25 Dec 25 19:03 UTC │
	│ image   │ no-preload-148352 image list --format=json                                                                                                                                                                                                         │ no-preload-148352            │ jenkins │ v1.37.0 │ 25 Dec 25 19:03 UTC │ 25 Dec 25 19:03 UTC │
	│ pause   │ -p no-preload-148352 --alsologtostderr -v=1                                                                                                                                                                                                        │ no-preload-148352            │ jenkins │ v1.37.0 │ 25 Dec 25 19:03 UTC │                     │
	│ delete  │ -p no-preload-148352                                                                                                                                                                                                                               │ no-preload-148352            │ jenkins │ v1.37.0 │ 25 Dec 25 19:03 UTC │ 25 Dec 25 19:03 UTC │
	│ delete  │ -p no-preload-148352                                                                                                                                                                                                                               │ no-preload-148352            │ jenkins │ v1.37.0 │ 25 Dec 25 19:03 UTC │ 25 Dec 25 19:03 UTC │
	│ start   │ -p newest-cni-731832 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-rc.1 │ newest-cni-731832            │ jenkins │ v1.37.0 │ 25 Dec 25 19:03 UTC │ 25 Dec 25 19:03 UTC │
	│ image   │ embed-certs-684693 image list --format=json                                                                                                                                                                                                        │ embed-certs-684693           │ jenkins │ v1.37.0 │ 25 Dec 25 19:03 UTC │ 25 Dec 25 19:03 UTC │
	│ pause   │ -p embed-certs-684693 --alsologtostderr -v=1                                                                                                                                                                                                       │ embed-certs-684693           │ jenkins │ v1.37.0 │ 25 Dec 25 19:03 UTC │                     │
	│ delete  │ -p embed-certs-684693                                                                                                                                                                                                                              │ embed-certs-684693           │ jenkins │ v1.37.0 │ 25 Dec 25 19:03 UTC │ 25 Dec 25 19:03 UTC │
	│ delete  │ -p embed-certs-684693                                                                                                                                                                                                                              │ embed-certs-684693           │ jenkins │ v1.37.0 │ 25 Dec 25 19:03 UTC │ 25 Dec 25 19:03 UTC │
	│ start   │ -p auto-910464 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio                                                                                                                            │ auto-910464                  │ jenkins │ v1.37.0 │ 25 Dec 25 19:03 UTC │                     │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-960022 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                 │ default-k8s-diff-port-960022 │ jenkins │ v1.37.0 │ 25 Dec 25 19:03 UTC │                     │
	│ stop    │ -p default-k8s-diff-port-960022 --alsologtostderr -v=3                                                                                                                                                                                             │ default-k8s-diff-port-960022 │ jenkins │ v1.37.0 │ 25 Dec 25 19:03 UTC │ 25 Dec 25 19:04 UTC │
	│ addons  │ enable metrics-server -p newest-cni-731832 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                            │ newest-cni-731832            │ jenkins │ v1.37.0 │ 25 Dec 25 19:03 UTC │                     │
	│ stop    │ -p newest-cni-731832 --alsologtostderr -v=3                                                                                                                                                                                                        │ newest-cni-731832            │ jenkins │ v1.37.0 │ 25 Dec 25 19:03 UTC │ 25 Dec 25 19:04 UTC │
	│ addons  │ enable dashboard -p newest-cni-731832 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                       │ newest-cni-731832            │ jenkins │ v1.37.0 │ 25 Dec 25 19:04 UTC │ 25 Dec 25 19:04 UTC │
	│ start   │ -p newest-cni-731832 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-rc.1 │ newest-cni-731832            │ jenkins │ v1.37.0 │ 25 Dec 25 19:04 UTC │ 25 Dec 25 19:04 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-960022 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                            │ default-k8s-diff-port-960022 │ jenkins │ v1.37.0 │ 25 Dec 25 19:04 UTC │ 25 Dec 25 19:04 UTC │
	│ start   │ -p default-k8s-diff-port-960022 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.3                                                                           │ default-k8s-diff-port-960022 │ jenkins │ v1.37.0 │ 25 Dec 25 19:04 UTC │                     │
	│ image   │ newest-cni-731832 image list --format=json                                                                                                                                                                                                         │ newest-cni-731832            │ jenkins │ v1.37.0 │ 25 Dec 25 19:04 UTC │ 25 Dec 25 19:04 UTC │
	│ pause   │ -p newest-cni-731832 --alsologtostderr -v=1                                                                                                                                                                                                        │ newest-cni-731832            │ jenkins │ v1.37.0 │ 25 Dec 25 19:04 UTC │                     │
	└─────────┴────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────
────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/25 19:04:11
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1225 19:04:11.384704  310133 out.go:360] Setting OutFile to fd 1 ...
	I1225 19:04:11.384841  310133 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1225 19:04:11.384852  310133 out.go:374] Setting ErrFile to fd 2...
	I1225 19:04:11.384859  310133 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1225 19:04:11.385184  310133 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22301-5579/.minikube/bin
	I1225 19:04:11.385734  310133 out.go:368] Setting JSON to false
	I1225 19:04:11.386982  310133 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":2799,"bootTime":1766686652,"procs":304,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1225 19:04:11.387047  310133 start.go:143] virtualization: kvm guest
	I1225 19:04:11.389145  310133 out.go:179] * [default-k8s-diff-port-960022] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1225 19:04:11.391235  310133 out.go:179]   - MINIKUBE_LOCATION=22301
	I1225 19:04:11.391228  310133 notify.go:221] Checking for updates...
	I1225 19:04:11.394351  310133 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1225 19:04:11.396029  310133 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22301-5579/kubeconfig
	I1225 19:04:11.397662  310133 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22301-5579/.minikube
	I1225 19:04:11.399180  310133 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1225 19:04:11.400803  310133 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1225 19:04:11.403098  310133 config.go:182] Loaded profile config "default-k8s-diff-port-960022": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1225 19:04:11.403833  310133 driver.go:422] Setting default libvirt URI to qemu:///system
	I1225 19:04:11.429851  310133 docker.go:124] docker version: linux-29.1.3:Docker Engine - Community
	I1225 19:04:11.429947  310133 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1225 19:04:11.487527  310133 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:65 OomKillDisable:false NGoroutines:76 SystemTime:2025-12-25 19:04:11.476936748 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:dea7da592f5d1d2b7755e3a161be07f43fad8f75 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1225 19:04:11.487639  310133 docker.go:319] overlay module found
	I1225 19:04:11.490226  310133 out.go:179] * Using the docker driver based on existing profile
	I1225 19:04:11.491362  310133 start.go:309] selected driver: docker
	I1225 19:04:11.491378  310133 start.go:928] validating driver "docker" against &{Name:default-k8s-diff-port-960022 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:default-k8s-diff-port-960022 Namespace:default APISe
rverHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8444 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[]
MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1225 19:04:11.491474  310133 start.go:939] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1225 19:04:11.492179  310133 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1225 19:04:11.545943  310133 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:65 OomKillDisable:false NGoroutines:76 SystemTime:2025-12-25 19:04:11.536358471 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:dea7da592f5d1d2b7755e3a161be07f43fad8f75 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1225 19:04:11.546224  310133 start_flags.go:1019] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1225 19:04:11.546256  310133 cni.go:84] Creating CNI manager for ""
	I1225 19:04:11.546303  310133 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1225 19:04:11.546336  310133 start.go:353] cluster config:
	{Name:default-k8s-diff-port-960022 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:default-k8s-diff-port-960022 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:
cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8444 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:fals
e DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1225 19:04:11.548400  310133 out.go:179] * Starting "default-k8s-diff-port-960022" primary control-plane node in "default-k8s-diff-port-960022" cluster
	I1225 19:04:11.549704  310133 cache.go:134] Beginning downloading kic base image for docker with crio
	I1225 19:04:11.550989  310133 out.go:179] * Pulling base image v0.0.48-1766570851-22316 ...
	I1225 19:04:11.552134  310133 preload.go:188] Checking if preload exists for k8s version v1.34.3 and runtime crio
	I1225 19:04:11.552173  310133 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22301-5579/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.3-cri-o-overlay-amd64.tar.lz4
	I1225 19:04:11.552184  310133 cache.go:65] Caching tarball of preloaded images
	I1225 19:04:11.552256  310133 preload.go:251] Found /home/jenkins/minikube-integration/22301-5579/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1225 19:04:11.552257  310133 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a in local docker daemon
	I1225 19:04:11.552266  310133 cache.go:68] Finished verifying existence of preloaded tar for v1.34.3 on crio
	I1225 19:04:11.552424  310133 profile.go:143] Saving config to /home/jenkins/minikube-integration/22301-5579/.minikube/profiles/default-k8s-diff-port-960022/config.json ...
	I1225 19:04:11.575323  310133 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a in local docker daemon, skipping pull
	I1225 19:04:11.575353  310133 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a exists in daemon, skipping load
	I1225 19:04:11.575370  310133 cache.go:243] Successfully downloaded all kic artifacts
	I1225 19:04:11.575405  310133 start.go:360] acquireMachinesLock for default-k8s-diff-port-960022: {Name:mk439ca411b17a34361cdf557c6ddd774780f327 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1225 19:04:11.575480  310133 start.go:364] duration metric: took 40.957µs to acquireMachinesLock for "default-k8s-diff-port-960022"
	I1225 19:04:11.575501  310133 start.go:96] Skipping create...Using existing machine configuration
	I1225 19:04:11.575508  310133 fix.go:54] fixHost starting: 
	I1225 19:04:11.575810  310133 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-960022 --format={{.State.Status}}
	I1225 19:04:11.595262  310133 fix.go:112] recreateIfNeeded on default-k8s-diff-port-960022: state=Stopped err=<nil>
	W1225 19:04:11.595310  310133 fix.go:138] unexpected machine state, will restart: <nil>
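The fixHost step above branches on the container state that docker inspect reports; reproducing that check by hand for this profile is a one-liner (docker itself reports "exited" for a stopped container, which minikube surfaces here as state=Stopped):

    # print the raw state string fix.go keys off
    docker container inspect default-k8s-diff-port-960022 --format '{{.State.Status}}'
    # "running" -> reuse the machine, "exited" -> take the restart path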
	I1225 19:04:07.067071  308802 out.go:252] * Restarting existing docker container for "newest-cni-731832" ...
	I1225 19:04:07.067149  308802 cli_runner.go:164] Run: docker start newest-cni-731832
	I1225 19:04:07.313050  308802 cli_runner.go:164] Run: docker container inspect newest-cni-731832 --format={{.State.Status}}
	I1225 19:04:07.331810  308802 kic.go:430] container "newest-cni-731832" state is running.
	I1225 19:04:07.332186  308802 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-731832
	I1225 19:04:07.352635  308802 profile.go:143] Saving config to /home/jenkins/minikube-integration/22301-5579/.minikube/profiles/newest-cni-731832/config.json ...
	I1225 19:04:07.352835  308802 machine.go:94] provisionDockerMachine start ...
	I1225 19:04:07.352994  308802 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-731832
	I1225 19:04:07.372105  308802 main.go:144] libmachine: Using SSH client type: native
	I1225 19:04:07.372327  308802 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e300] 0x850fa0 <nil>  [] 0s} 127.0.0.1 33103 <nil> <nil>}
	I1225 19:04:07.372339  308802 main.go:144] libmachine: About to run SSH command:
	hostname
	I1225 19:04:07.373023  308802 main.go:144] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:45246->127.0.0.1:33103: read: connection reset by peer
	I1225 19:04:10.497866  308802 main.go:144] libmachine: SSH cmd err, output: <nil>: newest-cni-731832
	
	I1225 19:04:10.497918  308802 ubuntu.go:182] provisioning hostname "newest-cni-731832"
	I1225 19:04:10.497994  308802 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-731832
	I1225 19:04:10.515077  308802 main.go:144] libmachine: Using SSH client type: native
	I1225 19:04:10.515352  308802 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e300] 0x850fa0 <nil>  [] 0s} 127.0.0.1 33103 <nil> <nil>}
	I1225 19:04:10.515371  308802 main.go:144] libmachine: About to run SSH command:
	sudo hostname newest-cni-731832 && echo "newest-cni-731832" | sudo tee /etc/hostname
	I1225 19:04:10.649767  308802 main.go:144] libmachine: SSH cmd err, output: <nil>: newest-cni-731832
	
	I1225 19:04:10.649841  308802 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-731832
	I1225 19:04:10.669578  308802 main.go:144] libmachine: Using SSH client type: native
	I1225 19:04:10.669786  308802 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e300] 0x850fa0 <nil>  [] 0s} 127.0.0.1 33103 <nil> <nil>}
	I1225 19:04:10.669803  308802 main.go:144] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-731832' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-731832/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-731832' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1225 19:04:10.792176  308802 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	I1225 19:04:10.792208  308802 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22301-5579/.minikube CaCertPath:/home/jenkins/minikube-integration/22301-5579/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22301-5579/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22301-5579/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22301-5579/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22301-5579/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22301-5579/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22301-5579/.minikube}
	I1225 19:04:10.792253  308802 ubuntu.go:190] setting up certificates
	I1225 19:04:10.792265  308802 provision.go:84] configureAuth start
	I1225 19:04:10.792313  308802 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-731832
	I1225 19:04:10.811145  308802 provision.go:143] copyHostCerts
	I1225 19:04:10.811234  308802 exec_runner.go:144] found /home/jenkins/minikube-integration/22301-5579/.minikube/ca.pem, removing ...
	I1225 19:04:10.811251  308802 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22301-5579/.minikube/ca.pem
	I1225 19:04:10.811325  308802 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22301-5579/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22301-5579/.minikube/ca.pem (1078 bytes)
	I1225 19:04:10.811425  308802 exec_runner.go:144] found /home/jenkins/minikube-integration/22301-5579/.minikube/cert.pem, removing ...
	I1225 19:04:10.811433  308802 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22301-5579/.minikube/cert.pem
	I1225 19:04:10.811459  308802 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22301-5579/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22301-5579/.minikube/cert.pem (1123 bytes)
	I1225 19:04:10.811558  308802 exec_runner.go:144] found /home/jenkins/minikube-integration/22301-5579/.minikube/key.pem, removing ...
	I1225 19:04:10.811575  308802 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22301-5579/.minikube/key.pem
	I1225 19:04:10.811603  308802 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22301-5579/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22301-5579/.minikube/key.pem (1679 bytes)
	I1225 19:04:10.811678  308802 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22301-5579/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22301-5579/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22301-5579/.minikube/certs/ca-key.pem org=jenkins.newest-cni-731832 san=[127.0.0.1 192.168.85.2 localhost minikube newest-cni-731832]
	I1225 19:04:10.917517  308802 provision.go:177] copyRemoteCerts
	I1225 19:04:10.917579  308802 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1225 19:04:10.917618  308802 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-731832
	I1225 19:04:10.936069  308802 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33103 SSHKeyPath:/home/jenkins/minikube-integration/22301-5579/.minikube/machines/newest-cni-731832/id_rsa Username:docker}
	I1225 19:04:11.038600  308802 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22301-5579/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1225 19:04:11.055874  308802 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22301-5579/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1225 19:04:11.074058  308802 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22301-5579/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1225 19:04:11.093042  308802 provision.go:87] duration metric: took 300.759628ms to configureAuth
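configureAuth above regenerates the machine server certificate with the SAN list shown at provision.go:117; to confirm those names actually landed in the cert, one option (paths taken from this run) is:

    openssl x509 -noout -text \
      -in /home/jenkins/minikube-integration/22301-5579/.minikube/machines/server.pem \
      | grep -A1 'Subject Alternative Name'
    # expected: 127.0.0.1, 192.168.85.2, localhost, minikube, newest-cni-731832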
	I1225 19:04:11.093069  308802 ubuntu.go:206] setting minikube options for container-runtime
	I1225 19:04:11.093236  308802 config.go:182] Loaded profile config "newest-cni-731832": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-rc.1
	I1225 19:04:11.093327  308802 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-731832
	I1225 19:04:11.111873  308802 main.go:144] libmachine: Using SSH client type: native
	I1225 19:04:11.112095  308802 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e300] 0x850fa0 <nil>  [] 0s} 127.0.0.1 33103 <nil> <nil>}
	I1225 19:04:11.112113  308802 main.go:144] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1225 19:04:11.418070  308802 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1225 19:04:11.418096  308802 machine.go:97] duration metric: took 4.065246722s to provisionDockerMachine
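The step above writes CRIO_MINIKUBE_OPTIONS into /etc/sysconfig/crio.minikube on the node and restarts CRI-O; assuming the kicbase crio.service sources that file via an EnvironmentFile= line (not shown in this log), the result can be spot-checked with:

    minikube -p newest-cni-731832 ssh -- cat /etc/sysconfig/crio.minikube
    minikube -p newest-cni-731832 ssh -- systemctl cat crio | grep -i environmentfile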
	I1225 19:04:11.418110  308802 start.go:293] postStartSetup for "newest-cni-731832" (driver="docker")
	I1225 19:04:11.418127  308802 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1225 19:04:11.418198  308802 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1225 19:04:11.418244  308802 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-731832
	I1225 19:04:11.438383  308802 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33103 SSHKeyPath:/home/jenkins/minikube-integration/22301-5579/.minikube/machines/newest-cni-731832/id_rsa Username:docker}
	I1225 19:04:11.536100  308802 ssh_runner.go:195] Run: cat /etc/os-release
	I1225 19:04:11.540031  308802 main.go:144] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1225 19:04:11.540060  308802 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1225 19:04:11.540073  308802 filesync.go:126] Scanning /home/jenkins/minikube-integration/22301-5579/.minikube/addons for local assets ...
	I1225 19:04:11.540131  308802 filesync.go:126] Scanning /home/jenkins/minikube-integration/22301-5579/.minikube/files for local assets ...
	I1225 19:04:11.540241  308802 filesync.go:149] local asset: /home/jenkins/minikube-integration/22301-5579/.minikube/files/etc/ssl/certs/91122.pem -> 91122.pem in /etc/ssl/certs
	I1225 19:04:11.540367  308802 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1225 19:04:11.548967  308802 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22301-5579/.minikube/files/etc/ssl/certs/91122.pem --> /etc/ssl/certs/91122.pem (1708 bytes)
	I1225 19:04:11.567699  308802 start.go:296] duration metric: took 149.574945ms for postStartSetup
	I1225 19:04:11.567788  308802 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1225 19:04:11.567834  308802 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-731832
	I1225 19:04:11.587734  308802 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33103 SSHKeyPath:/home/jenkins/minikube-integration/22301-5579/.minikube/machines/newest-cni-731832/id_rsa Username:docker}
	I1225 19:04:11.676950  308802 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1225 19:04:11.682301  308802 fix.go:56] duration metric: took 4.636203469s for fixHost
	I1225 19:04:11.682330  308802 start.go:83] releasing machines lock for "newest-cni-731832", held for 4.636252625s
	I1225 19:04:11.682397  308802 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-731832
	I1225 19:04:11.702382  308802 ssh_runner.go:195] Run: cat /version.json
	I1225 19:04:11.702442  308802 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-731832
	I1225 19:04:11.702450  308802 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1225 19:04:11.702535  308802 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-731832
	I1225 19:04:11.728312  308802 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33103 SSHKeyPath:/home/jenkins/minikube-integration/22301-5579/.minikube/machines/newest-cni-731832/id_rsa Username:docker}
	I1225 19:04:11.728520  308802 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33103 SSHKeyPath:/home/jenkins/minikube-integration/22301-5579/.minikube/machines/newest-cni-731832/id_rsa Username:docker}
	I1225 19:04:11.821338  308802 ssh_runner.go:195] Run: systemctl --version
	I1225 19:04:12.269989  260034 api_server.go:315] stopped: https://192.168.94.2:8443/healthz: Get "https://192.168.94.2:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1225 19:04:12.270074  260034 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1225 19:04:12.270136  260034 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1225 19:04:12.298390  260034 cri.go:96] found id: "c7e7a74e4da2eb0a24231c0da6fca5c4d2624c5eff5fd59ea857dba6d0787036"
	I1225 19:04:12.298416  260034 cri.go:96] found id: "1d4a54214a1675f6c77eff77964da947e4f00cff45e007ad6adb0c898c3e76fa"
	I1225 19:04:12.298422  260034 cri.go:96] found id: "e3d5802d1e3d9285d4b0925e9f5391b90305352a65abbab3d6461e977e07719c"
	I1225 19:04:12.298427  260034 cri.go:96] found id: ""
	I1225 19:04:12.298436  260034 logs.go:282] 3 containers: [c7e7a74e4da2eb0a24231c0da6fca5c4d2624c5eff5fd59ea857dba6d0787036 1d4a54214a1675f6c77eff77964da947e4f00cff45e007ad6adb0c898c3e76fa e3d5802d1e3d9285d4b0925e9f5391b90305352a65abbab3d6461e977e07719c]
	I1225 19:04:12.298494  260034 ssh_runner.go:195] Run: which crictl
	I1225 19:04:12.302241  260034 ssh_runner.go:195] Run: which crictl
	I1225 19:04:12.305782  260034 ssh_runner.go:195] Run: which crictl
	I1225 19:04:12.309201  260034 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1225 19:04:12.309256  260034 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1225 19:04:12.338463  260034 cri.go:96] found id: "b9e20d208a63ece66f4b7d503d5220ba92810f69d6963dee30a8cf70eeec5508"
	I1225 19:04:12.338486  260034 cri.go:96] found id: ""
	I1225 19:04:12.338495  260034 logs.go:282] 1 containers: [b9e20d208a63ece66f4b7d503d5220ba92810f69d6963dee30a8cf70eeec5508]
	I1225 19:04:12.338558  260034 ssh_runner.go:195] Run: which crictl
	I1225 19:04:12.343086  260034 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1225 19:04:12.343161  260034 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1225 19:04:12.372712  260034 cri.go:96] found id: ""
	I1225 19:04:12.372740  260034 logs.go:282] 0 containers: []
	W1225 19:04:12.372752  260034 logs.go:284] No container was found matching "coredns"
	I1225 19:04:12.372760  260034 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1225 19:04:12.372810  260034 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1225 19:04:12.401198  260034 cri.go:96] found id: "5908cdc560e3221d929c96c779b2e73ece21f96acb63b3adbf448cc7de791f2b"
	I1225 19:04:12.401218  260034 cri.go:96] found id: "47a1daa4bde269c4add70fd957319c2bd011e78566a1187d9a76b78fb4005782"
	I1225 19:04:12.401223  260034 cri.go:96] found id: ""
	I1225 19:04:12.401230  260034 logs.go:282] 2 containers: [5908cdc560e3221d929c96c779b2e73ece21f96acb63b3adbf448cc7de791f2b 47a1daa4bde269c4add70fd957319c2bd011e78566a1187d9a76b78fb4005782]
	I1225 19:04:12.401285  260034 ssh_runner.go:195] Run: which crictl
	I1225 19:04:12.404882  260034 ssh_runner.go:195] Run: which crictl
	I1225 19:04:12.408479  260034 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1225 19:04:12.408547  260034 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1225 19:04:12.434675  260034 cri.go:96] found id: ""
	I1225 19:04:12.434705  260034 logs.go:282] 0 containers: []
	W1225 19:04:12.434716  260034 logs.go:284] No container was found matching "kube-proxy"
	I1225 19:04:12.434723  260034 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1225 19:04:12.434792  260034 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1225 19:04:12.462729  260034 cri.go:96] found id: "0887dc59ddb6090de10772f8a1f50ab0d2afe1586d7c8d118ffeef1810deb88d"
	I1225 19:04:12.462752  260034 cri.go:96] found id: "33343dda71f9e4a85025812aa89a625b9ed4aacd8cf40e537a040b7a352c6338"
	I1225 19:04:12.462758  260034 cri.go:96] found id: ""
	I1225 19:04:12.462767  260034 logs.go:282] 2 containers: [0887dc59ddb6090de10772f8a1f50ab0d2afe1586d7c8d118ffeef1810deb88d 33343dda71f9e4a85025812aa89a625b9ed4aacd8cf40e537a040b7a352c6338]
	I1225 19:04:12.462824  260034 ssh_runner.go:195] Run: which crictl
	I1225 19:04:12.466713  260034 ssh_runner.go:195] Run: which crictl
	I1225 19:04:12.470287  260034 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1225 19:04:12.470339  260034 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1225 19:04:12.497830  260034 cri.go:96] found id: ""
	I1225 19:04:12.497855  260034 logs.go:282] 0 containers: []
	W1225 19:04:12.497867  260034 logs.go:284] No container was found matching "kindnet"
	I1225 19:04:12.497875  260034 cri.go:61] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1225 19:04:12.498008  260034 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=storage-provisioner
	I1225 19:04:12.525112  260034 cri.go:96] found id: ""
	I1225 19:04:12.525136  260034 logs.go:282] 0 containers: []
	W1225 19:04:12.525147  260034 logs.go:284] No container was found matching "storage-provisioner"
	I1225 19:04:12.525158  260034 logs.go:123] Gathering logs for kube-apiserver [c7e7a74e4da2eb0a24231c0da6fca5c4d2624c5eff5fd59ea857dba6d0787036] ...
	I1225 19:04:12.525172  260034 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 c7e7a74e4da2eb0a24231c0da6fca5c4d2624c5eff5fd59ea857dba6d0787036"
	I1225 19:04:12.557871  260034 logs.go:123] Gathering logs for kube-apiserver [e3d5802d1e3d9285d4b0925e9f5391b90305352a65abbab3d6461e977e07719c] ...
	I1225 19:04:12.557917  260034 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e3d5802d1e3d9285d4b0925e9f5391b90305352a65abbab3d6461e977e07719c"
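The log-gathering loop above is crictl ps filtered by container name followed by crictl logs on each ID; the same data can be pulled manually on the node (container name taken from this run, the ID is a placeholder):

    sudo crictl ps -a --name kube-apiserver --quiet   # list matching container IDs
    sudo crictl logs --tail 400 <container-id>        # last 400 lines of one of them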
	I1225 19:04:11.882525  308802 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1225 19:04:11.919131  308802 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1225 19:04:11.923720  308802 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1225 19:04:11.923780  308802 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1225 19:04:11.932703  308802 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1225 19:04:11.932728  308802 start.go:496] detecting cgroup driver to use...
	I1225 19:04:11.932756  308802 detect.go:190] detected "systemd" cgroup driver on host os
	I1225 19:04:11.932819  308802 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1225 19:04:11.947054  308802 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1225 19:04:11.960187  308802 docker.go:218] disabling cri-docker service (if available) ...
	I1225 19:04:11.960255  308802 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1225 19:04:11.973971  308802 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1225 19:04:11.986465  308802 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1225 19:04:12.080359  308802 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1225 19:04:12.183989  308802 docker.go:234] disabling docker service ...
	I1225 19:04:12.184051  308802 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1225 19:04:12.198883  308802 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1225 19:04:12.211397  308802 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1225 19:04:12.288885  308802 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1225 19:04:12.381673  308802 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
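Because this profile uses CRI-O, minikube stops and masks the competing runtimes; the sequence above is the usual systemd stop/disable/mask pattern, condensed here for reference:

    sudo systemctl stop -f cri-docker.socket cri-docker.service docker.socket docker.service
    sudo systemctl disable cri-docker.socket docker.socket
    sudo systemctl mask cri-docker.service docker.service
    systemctl is-active docker    # "inactive" once the units are down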
	I1225 19:04:12.395142  308802 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1225 19:04:12.410781  308802 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1225 19:04:12.410842  308802 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1225 19:04:12.419543  308802 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1225 19:04:12.419607  308802 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1225 19:04:12.428311  308802 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1225 19:04:12.437991  308802 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1225 19:04:12.447276  308802 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1225 19:04:12.455723  308802 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1225 19:04:12.466045  308802 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1225 19:04:12.474719  308802 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1225 19:04:12.483293  308802 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1225 19:04:12.491554  308802 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1225 19:04:12.500409  308802 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1225 19:04:12.588423  308802 ssh_runner.go:195] Run: sudo systemctl restart crio
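The sed edits above point CRI-O at the systemd cgroup manager and the registry.k8s.io/pause:3.10.1 image before the restart; once crio is back up, the effective values can be read back from the merged config:

    sudo crio config | grep -E 'cgroup_manager|pause_image'
    sudo systemctl is-active crio    # should print "active" after the restart completes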
	I1225 19:04:12.723363  308802 start.go:553] Will wait 60s for socket path /var/run/crio/crio.sock
	I1225 19:04:12.723417  308802 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1225 19:04:12.727494  308802 start.go:574] Will wait 60s for crictl version
	I1225 19:04:12.727558  308802 ssh_runner.go:195] Run: which crictl
	I1225 19:04:12.731414  308802 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1225 19:04:12.759884  308802 start.go:590] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1225 19:04:12.759974  308802 ssh_runner.go:195] Run: crio --version
	I1225 19:04:12.789979  308802 ssh_runner.go:195] Run: crio --version
	I1225 19:04:12.821682  308802 out.go:179] * Preparing Kubernetes v1.35.0-rc.1 on CRI-O 1.34.3 ...
	I1225 19:04:12.822811  308802 cli_runner.go:164] Run: docker network inspect newest-cni-731832 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1225 19:04:12.840743  308802 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1225 19:04:12.845472  308802 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1225 19:04:12.858268  308802 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I1225 19:04:12.860558  308802 kubeadm.go:884] updating cluster {Name:newest-cni-731832 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:newest-cni-731832 Namespace:default APIServerHAVIP: APIServerName:minikubeC
A APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: M
ountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false} ...
	I1225 19:04:12.860686  308802 preload.go:188] Checking if preload exists for k8s version v1.35.0-rc.1 and runtime crio
	I1225 19:04:12.860737  308802 ssh_runner.go:195] Run: sudo crictl images --output json
	I1225 19:04:12.894300  308802 crio.go:561] all images are preloaded for cri-o runtime.
	I1225 19:04:12.894327  308802 crio.go:433] Images already preloaded, skipping extraction
	I1225 19:04:12.894393  308802 ssh_runner.go:195] Run: sudo crictl images --output json
	I1225 19:04:12.922290  308802 crio.go:561] all images are preloaded for cri-o runtime.
	I1225 19:04:12.922310  308802 cache_images.go:86] Images are preloaded, skipping loading
	I1225 19:04:12.922317  308802 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.35.0-rc.1 crio true true} ...
	I1225 19:04:12.922411  308802 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-rc.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=newest-cni-731832 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-rc.1 ClusterName:newest-cni-731832 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1225 19:04:12.922487  308802 ssh_runner.go:195] Run: crio config
	I1225 19:04:12.973709  308802 cni.go:84] Creating CNI manager for ""
	I1225 19:04:12.973743  308802 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1225 19:04:12.973761  308802 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I1225 19:04:12.973796  308802 kubeadm.go:197] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.35.0-rc.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-731832 NodeName:newest-cni-731832 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/
etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1225 19:04:12.974017  308802 kubeadm.go:203] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "newest-cni-731832"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-rc.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1225 19:04:12.974113  308802 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-rc.1
	I1225 19:04:12.983468  308802 binaries.go:51] Found k8s binaries, skipping transfer
	I1225 19:04:12.983542  308802 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1225 19:04:12.992769  308802 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (372 bytes)
	I1225 19:04:13.006182  308802 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (357 bytes)
	I1225 19:04:13.019135  308802 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2216 bytes)
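The rendered kubeadm config lands in /var/tmp/minikube/kubeadm.yaml.new just above; recent kubeadm releases can sanity-check such a file in place, though this log never runs the subcommand, so treat it as an optional extra:

    sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubeadm config validate \
      --config /var/tmp/minikube/kubeadm.yaml.new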
	I1225 19:04:13.032740  308802 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1225 19:04:13.036471  308802 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1225 19:04:13.046514  308802 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1225 19:04:13.127733  308802 ssh_runner.go:195] Run: sudo systemctl start kubelet
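After the 10-kubeadm.conf drop-in is copied, daemon-reload plus start picks up the ExecStart override; the merged unit and the first lines of kubelet output can be checked on the node with:

    sudo systemctl cat kubelet | grep -A2 ExecStart
    sudo journalctl -u kubelet --no-pager -n 20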
	I1225 19:04:13.155464  308802 certs.go:69] Setting up /home/jenkins/minikube-integration/22301-5579/.minikube/profiles/newest-cni-731832 for IP: 192.168.85.2
	I1225 19:04:13.155488  308802 certs.go:195] generating shared ca certs ...
	I1225 19:04:13.155507  308802 certs.go:227] acquiring lock for ca certs: {Name:mkc96ab6366f062029d385d20297063671b19bca Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1225 19:04:13.155669  308802 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22301-5579/.minikube/ca.key
	I1225 19:04:13.155727  308802 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22301-5579/.minikube/proxy-client-ca.key
	I1225 19:04:13.155749  308802 certs.go:257] generating profile certs ...
	I1225 19:04:13.155855  308802 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22301-5579/.minikube/profiles/newest-cni-731832/client.key
	I1225 19:04:13.155944  308802 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22301-5579/.minikube/profiles/newest-cni-731832/apiserver.key.e5cae685
	I1225 19:04:13.156000  308802 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22301-5579/.minikube/profiles/newest-cni-731832/proxy-client.key
	I1225 19:04:13.156135  308802 certs.go:484] found cert: /home/jenkins/minikube-integration/22301-5579/.minikube/certs/9112.pem (1338 bytes)
	W1225 19:04:13.156174  308802 certs.go:480] ignoring /home/jenkins/minikube-integration/22301-5579/.minikube/certs/9112_empty.pem, impossibly tiny 0 bytes
	I1225 19:04:13.156194  308802 certs.go:484] found cert: /home/jenkins/minikube-integration/22301-5579/.minikube/certs/ca-key.pem (1679 bytes)
	I1225 19:04:13.156235  308802 certs.go:484] found cert: /home/jenkins/minikube-integration/22301-5579/.minikube/certs/ca.pem (1078 bytes)
	I1225 19:04:13.156267  308802 certs.go:484] found cert: /home/jenkins/minikube-integration/22301-5579/.minikube/certs/cert.pem (1123 bytes)
	I1225 19:04:13.156296  308802 certs.go:484] found cert: /home/jenkins/minikube-integration/22301-5579/.minikube/certs/key.pem (1679 bytes)
	I1225 19:04:13.156353  308802 certs.go:484] found cert: /home/jenkins/minikube-integration/22301-5579/.minikube/files/etc/ssl/certs/91122.pem (1708 bytes)
	I1225 19:04:13.157183  308802 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22301-5579/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1225 19:04:13.175521  308802 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22301-5579/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1225 19:04:13.195987  308802 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22301-5579/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1225 19:04:13.215627  308802 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22301-5579/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1225 19:04:13.239754  308802 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22301-5579/.minikube/profiles/newest-cni-731832/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1225 19:04:13.258932  308802 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22301-5579/.minikube/profiles/newest-cni-731832/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1225 19:04:13.275724  308802 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22301-5579/.minikube/profiles/newest-cni-731832/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1225 19:04:13.293394  308802 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22301-5579/.minikube/profiles/newest-cni-731832/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1225 19:04:13.310335  308802 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22301-5579/.minikube/files/etc/ssl/certs/91122.pem --> /usr/share/ca-certificates/91122.pem (1708 bytes)
	I1225 19:04:13.326933  308802 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22301-5579/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1225 19:04:13.344129  308802 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22301-5579/.minikube/certs/9112.pem --> /usr/share/ca-certificates/9112.pem (1338 bytes)
	I1225 19:04:13.362482  308802 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (722 bytes)
	I1225 19:04:13.375333  308802 ssh_runner.go:195] Run: openssl version
	I1225 19:04:13.381545  308802 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1225 19:04:13.389249  308802 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1225 19:04:13.396582  308802 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1225 19:04:13.400393  308802 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 25 18:28 /usr/share/ca-certificates/minikubeCA.pem
	I1225 19:04:13.400455  308802 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1225 19:04:13.434584  308802 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1225 19:04:13.442360  308802 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/9112.pem
	I1225 19:04:13.449776  308802 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/9112.pem /etc/ssl/certs/9112.pem
	I1225 19:04:13.457767  308802 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/9112.pem
	I1225 19:04:13.461682  308802 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 25 18:34 /usr/share/ca-certificates/9112.pem
	I1225 19:04:13.461741  308802 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/9112.pem
	I1225 19:04:13.496673  308802 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1225 19:04:13.504785  308802 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/91122.pem
	I1225 19:04:13.512223  308802 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/91122.pem /etc/ssl/certs/91122.pem
	I1225 19:04:13.519632  308802 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/91122.pem
	I1225 19:04:13.523420  308802 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 25 18:34 /usr/share/ca-certificates/91122.pem
	I1225 19:04:13.523472  308802 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/91122.pem
	I1225 19:04:13.558134  308802 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
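The ln/openssl sequence above is how minikube installs its CAs into the node's trust store: openssl x509 -hash prints the subject hash that OpenSSL expects as the symlink name under /etc/ssl/certs (b5213941 for minikubeCA in this run). The pattern in one place:

    h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
    sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem "/etc/ssl/certs/${h}.0"
    sudo test -L "/etc/ssl/certs/${h}.0" && echo "trusted as ${h}.0"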
	I1225 19:04:13.566036  308802 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1225 19:04:13.569812  308802 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1225 19:04:13.605568  308802 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1225 19:04:13.640439  308802 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1225 19:04:13.681298  308802 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1225 19:04:13.723331  308802 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1225 19:04:13.765716  308802 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
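The -checkend 86400 probes above are the certificate-expiry check: openssl exits 0 if the certificate is still valid 86400 seconds (24 h) from now and non-zero otherwise, so only the exit status matters:

    if openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400; then
        echo "cert valid for at least another 24h"
    else
        echo "cert expires within 24h"
    fi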
	I1225 19:04:13.825970  308802 kubeadm.go:401] StartCluster: {Name:newest-cni-731832 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:newest-cni-731832 Namespace:default APIServerHAVIP: APIServerName:minikubeCA A
PIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: Moun
tMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1225 19:04:13.826084  308802 cri.go:61] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1225 19:04:13.826163  308802 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1225 19:04:13.857734  308802 cri.go:96] found id: "e02cd2fcac3d735d321c341c2fba7aabc974e0d4826fa67f14fd79754e0c64c4"
	I1225 19:04:13.857756  308802 cri.go:96] found id: "75fd7f6e481e82625456301d656dce65b6f0292112145825cd68747d96e652ac"
	I1225 19:04:13.857762  308802 cri.go:96] found id: "f7d1c87d0020257be0bb0226c540e4432cc1529072a6a6a02e9610ce7d2a72ad"
	I1225 19:04:13.857766  308802 cri.go:96] found id: "7cd3b0eb1fd2e4969002541b2f4ae25ee7229906d8fe3533bb4ab750efb6b446"
	I1225 19:04:13.857771  308802 cri.go:96] found id: ""
	I1225 19:04:13.857820  308802 ssh_runner.go:195] Run: sudo runc list -f json
	W1225 19:04:13.870401  308802 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-25T19:04:13Z" level=error msg="open /run/runc: no such file or directory"
	I1225 19:04:13.870466  308802 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1225 19:04:13.878435  308802 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1225 19:04:13.878455  308802 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1225 19:04:13.878504  308802 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1225 19:04:13.886315  308802 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1225 19:04:13.887135  308802 kubeconfig.go:47] verify endpoint returned: get endpoint: "newest-cni-731832" does not appear in /home/jenkins/minikube-integration/22301-5579/kubeconfig
	I1225 19:04:13.887609  308802 kubeconfig.go:62] /home/jenkins/minikube-integration/22301-5579/kubeconfig needs updating (will repair): [kubeconfig missing "newest-cni-731832" cluster setting kubeconfig missing "newest-cni-731832" context setting]
	I1225 19:04:13.888350  308802 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22301-5579/kubeconfig: {Name:mk959de02482281f87c2171d9b2421941fad1e06 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1225 19:04:13.890085  308802 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1225 19:04:13.898296  308802 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.85.2
	I1225 19:04:13.898327  308802 kubeadm.go:602] duration metric: took 19.865231ms to restartPrimaryControlPlane
	I1225 19:04:13.898337  308802 kubeadm.go:403] duration metric: took 72.376848ms to StartCluster
	I1225 19:04:13.898353  308802 settings.go:142] acquiring lock: {Name:mk8db67a95daebdad9164c803819dcb179c3006a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1225 19:04:13.898416  308802 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22301-5579/kubeconfig
	I1225 19:04:13.899679  308802 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22301-5579/kubeconfig: {Name:mk959de02482281f87c2171d9b2421941fad1e06 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1225 19:04:13.899939  308802 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1225 19:04:13.900042  308802 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1225 19:04:13.900126  308802 config.go:182] Loaded profile config "newest-cni-731832": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-rc.1
	I1225 19:04:13.900144  308802 addons.go:70] Setting storage-provisioner=true in profile "newest-cni-731832"
	I1225 19:04:13.900175  308802 addons.go:70] Setting dashboard=true in profile "newest-cni-731832"
	I1225 19:04:13.900196  308802 addons.go:239] Setting addon dashboard=true in "newest-cni-731832"
	W1225 19:04:13.900205  308802 addons.go:248] addon dashboard should already be in state true
	I1225 19:04:13.900179  308802 addons.go:239] Setting addon storage-provisioner=true in "newest-cni-731832"
	I1225 19:04:13.900229  308802 host.go:66] Checking if "newest-cni-731832" exists ...
	W1225 19:04:13.900237  308802 addons.go:248] addon storage-provisioner should already be in state true
	I1225 19:04:13.900205  308802 addons.go:70] Setting default-storageclass=true in profile "newest-cni-731832"
	I1225 19:04:13.900269  308802 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-731832"
	I1225 19:04:13.900275  308802 host.go:66] Checking if "newest-cni-731832" exists ...
	I1225 19:04:13.900579  308802 cli_runner.go:164] Run: docker container inspect newest-cni-731832 --format={{.State.Status}}
	I1225 19:04:13.900696  308802 cli_runner.go:164] Run: docker container inspect newest-cni-731832 --format={{.State.Status}}
	I1225 19:04:13.900736  308802 cli_runner.go:164] Run: docker container inspect newest-cni-731832 --format={{.State.Status}}
	I1225 19:04:13.902944  308802 out.go:179] * Verifying Kubernetes components...
	I1225 19:04:13.904040  308802 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1225 19:04:13.925979  308802 addons.go:239] Setting addon default-storageclass=true in "newest-cni-731832"
	I1225 19:04:13.925989  308802 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	W1225 19:04:13.926001  308802 addons.go:248] addon default-storageclass should already be in state true
	I1225 19:04:13.925991  308802 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1225 19:04:13.926030  308802 host.go:66] Checking if "newest-cni-731832" exists ...
	I1225 19:04:13.926636  308802 cli_runner.go:164] Run: docker container inspect newest-cni-731832 --format={{.State.Status}}
	I1225 19:04:13.927603  308802 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1225 19:04:13.927733  308802 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1225 19:04:13.927780  308802 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-731832
	I1225 19:04:13.928838  308802 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1225 19:04:09.284018  301873 kapi.go:214] "coredns" deployment in "kube-system" namespace and "auto-910464" context rescaled to 1 replicas
	W1225 19:04:10.993378  301873 node_ready.go:57] node "auto-910464" has "Ready":"False" status (will retry)
	W1225 19:04:13.493600  301873 node_ready.go:57] node "auto-910464" has "Ready":"False" status (will retry)
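The node_ready retries above just poll the node's Ready condition; the equivalent kubectl check against this profile (context name assumed to match the profile name) is:

    kubectl --context auto-910464 get node auto-910464 \
      -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'
    # prints "False" until the CNI is up, then "True"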
	I1225 19:04:13.929925  308802 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1225 19:04:13.929947  308802 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1225 19:04:13.930005  308802 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-731832
	I1225 19:04:13.955992  308802 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1225 19:04:13.956023  308802 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1225 19:04:13.956090  308802 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-731832
	I1225 19:04:13.963216  308802 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33103 SSHKeyPath:/home/jenkins/minikube-integration/22301-5579/.minikube/machines/newest-cni-731832/id_rsa Username:docker}
	I1225 19:04:13.964535  308802 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33103 SSHKeyPath:/home/jenkins/minikube-integration/22301-5579/.minikube/machines/newest-cni-731832/id_rsa Username:docker}
	I1225 19:04:13.982963  308802 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33103 SSHKeyPath:/home/jenkins/minikube-integration/22301-5579/.minikube/machines/newest-cni-731832/id_rsa Username:docker}
	I1225 19:04:14.048029  308802 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1225 19:04:14.062093  308802 api_server.go:52] waiting for apiserver process to appear ...
	I1225 19:04:14.062161  308802 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1225 19:04:14.072280  308802 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1225 19:04:14.072301  308802 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1225 19:04:14.073713  308802 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1225 19:04:14.076233  308802 api_server.go:72] duration metric: took 176.260531ms to wait for apiserver process to appear ...
	I1225 19:04:14.076257  308802 api_server.go:88] waiting for apiserver healthz status ...
	I1225 19:04:14.076276  308802 api_server.go:299] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1225 19:04:14.086693  308802 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1225 19:04:14.086718  308802 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1225 19:04:14.092278  308802 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1225 19:04:14.102279  308802 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1225 19:04:14.102300  308802 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1225 19:04:14.119336  308802 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1225 19:04:14.119360  308802 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1225 19:04:14.134807  308802 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1225 19:04:14.134841  308802 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1225 19:04:14.148048  308802 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1225 19:04:14.148074  308802 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1225 19:04:14.161183  308802 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1225 19:04:14.161208  308802 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1225 19:04:14.173284  308802 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1225 19:04:14.173308  308802 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1225 19:04:14.185440  308802 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1225 19:04:14.185459  308802 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1225 19:04:14.197581  308802 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1225 19:04:15.623351  308802 api_server.go:325] https://192.168.85.2:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1225 19:04:15.623386  308802 api_server.go:103] status: https://192.168.85.2:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1225 19:04:15.623403  308802 api_server.go:299] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1225 19:04:15.631321  308802 api_server.go:325] https://192.168.85.2:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1225 19:04:15.631349  308802 api_server.go:103] status: https://192.168.85.2:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1225 19:04:16.076885  308802 api_server.go:299] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1225 19:04:16.081667  308802 api_server.go:325] https://192.168.85.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1225 19:04:16.081696  308802 api_server.go:103] status: https://192.168.85.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1225 19:04:16.215958  308802 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.142210457s)
	I1225 19:04:16.216060  308802 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.123751881s)
	I1225 19:04:16.216180  308802 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (2.018561095s)
	I1225 19:04:16.219528  308802 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p newest-cni-731832 addons enable metrics-server
	
	I1225 19:04:16.227649  308802 out.go:179] * Enabled addons: storage-provisioner, dashboard, default-storageclass
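
For reference (not part of the test output): once the dashboard manifests above are applied, the addon's pods land in the kubernetes-dashboard namespace created by dashboard-ns.yaml, and the kubectl context is named after the profile; both of those names are assumptions here rather than something printed in this log. A quick manual check would be:

	kubectl --context newest-cni-731832 -n kubernetes-dashboard get pods
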
	I1225 19:04:11.597312  310133 out.go:252] * Restarting existing docker container for "default-k8s-diff-port-960022" ...
	I1225 19:04:11.597387  310133 cli_runner.go:164] Run: docker start default-k8s-diff-port-960022
	I1225 19:04:11.857671  310133 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-960022 --format={{.State.Status}}
	I1225 19:04:11.877172  310133 kic.go:430] container "default-k8s-diff-port-960022" state is running.
	I1225 19:04:11.877665  310133 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-960022
	I1225 19:04:11.897052  310133 profile.go:143] Saving config to /home/jenkins/minikube-integration/22301-5579/.minikube/profiles/default-k8s-diff-port-960022/config.json ...
	I1225 19:04:11.897365  310133 machine.go:94] provisionDockerMachine start ...
	I1225 19:04:11.897455  310133 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-960022
	I1225 19:04:11.916569  310133 main.go:144] libmachine: Using SSH client type: native
	I1225 19:04:11.916937  310133 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e300] 0x850fa0 <nil>  [] 0s} 127.0.0.1 33108 <nil> <nil>}
	I1225 19:04:11.916957  310133 main.go:144] libmachine: About to run SSH command:
	hostname
	I1225 19:04:11.917573  310133 main.go:144] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:35982->127.0.0.1:33108: read: connection reset by peer
	I1225 19:04:15.049669  310133 main.go:144] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-960022
	
	I1225 19:04:15.049702  310133 ubuntu.go:182] provisioning hostname "default-k8s-diff-port-960022"
	I1225 19:04:15.049766  310133 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-960022
	I1225 19:04:15.070050  310133 main.go:144] libmachine: Using SSH client type: native
	I1225 19:04:15.070335  310133 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e300] 0x850fa0 <nil>  [] 0s} 127.0.0.1 33108 <nil> <nil>}
	I1225 19:04:15.070352  310133 main.go:144] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-960022 && echo "default-k8s-diff-port-960022" | sudo tee /etc/hostname
	I1225 19:04:15.210061  310133 main.go:144] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-960022
	
	I1225 19:04:15.210143  310133 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-960022
	I1225 19:04:15.229048  310133 main.go:144] libmachine: Using SSH client type: native
	I1225 19:04:15.229352  310133 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e300] 0x850fa0 <nil>  [] 0s} 127.0.0.1 33108 <nil> <nil>}
	I1225 19:04:15.229380  310133 main.go:144] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-960022' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-960022/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-960022' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1225 19:04:15.363887  310133 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	I1225 19:04:15.363946  310133 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22301-5579/.minikube CaCertPath:/home/jenkins/minikube-integration/22301-5579/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22301-5579/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22301-5579/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22301-5579/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22301-5579/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22301-5579/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22301-5579/.minikube}
	I1225 19:04:15.363966  310133 ubuntu.go:190] setting up certificates
	I1225 19:04:15.363976  310133 provision.go:84] configureAuth start
	I1225 19:04:15.364023  310133 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-960022
	I1225 19:04:15.383280  310133 provision.go:143] copyHostCerts
	I1225 19:04:15.383371  310133 exec_runner.go:144] found /home/jenkins/minikube-integration/22301-5579/.minikube/ca.pem, removing ...
	I1225 19:04:15.383392  310133 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22301-5579/.minikube/ca.pem
	I1225 19:04:15.383482  310133 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22301-5579/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22301-5579/.minikube/ca.pem (1078 bytes)
	I1225 19:04:15.383620  310133 exec_runner.go:144] found /home/jenkins/minikube-integration/22301-5579/.minikube/cert.pem, removing ...
	I1225 19:04:15.383634  310133 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22301-5579/.minikube/cert.pem
	I1225 19:04:15.383674  310133 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22301-5579/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22301-5579/.minikube/cert.pem (1123 bytes)
	I1225 19:04:15.383771  310133 exec_runner.go:144] found /home/jenkins/minikube-integration/22301-5579/.minikube/key.pem, removing ...
	I1225 19:04:15.383781  310133 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22301-5579/.minikube/key.pem
	I1225 19:04:15.383825  310133 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22301-5579/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22301-5579/.minikube/key.pem (1679 bytes)
	I1225 19:04:15.383941  310133 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22301-5579/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22301-5579/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22301-5579/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-960022 san=[127.0.0.1 192.168.103.2 default-k8s-diff-port-960022 localhost minikube]
	I1225 19:04:15.504561  310133 provision.go:177] copyRemoteCerts
	I1225 19:04:15.504618  310133 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1225 19:04:15.504660  310133 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-960022
	I1225 19:04:15.527735  310133 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33108 SSHKeyPath:/home/jenkins/minikube-integration/22301-5579/.minikube/machines/default-k8s-diff-port-960022/id_rsa Username:docker}
	I1225 19:04:15.642310  310133 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22301-5579/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1225 19:04:15.685007  310133 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22301-5579/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I1225 19:04:15.716090  310133 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22301-5579/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1225 19:04:15.740078  310133 provision.go:87] duration metric: took 376.087982ms to configureAuth
	I1225 19:04:15.740114  310133 ubuntu.go:206] setting minikube options for container-runtime
	I1225 19:04:15.740325  310133 config.go:182] Loaded profile config "default-k8s-diff-port-960022": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1225 19:04:15.740453  310133 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-960022
	I1225 19:04:15.761999  310133 main.go:144] libmachine: Using SSH client type: native
	I1225 19:04:15.762207  310133 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e300] 0x850fa0 <nil>  [] 0s} 127.0.0.1 33108 <nil> <nil>}
	I1225 19:04:15.762228  310133 main.go:144] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1225 19:04:16.118477  310133 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1225 19:04:16.118506  310133 machine.go:97] duration metric: took 4.221121275s to provisionDockerMachine
	I1225 19:04:16.118520  310133 start.go:293] postStartSetup for "default-k8s-diff-port-960022" (driver="docker")
	I1225 19:04:16.118533  310133 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1225 19:04:16.118597  310133 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1225 19:04:16.118639  310133 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-960022
	I1225 19:04:16.139042  310133 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33108 SSHKeyPath:/home/jenkins/minikube-integration/22301-5579/.minikube/machines/default-k8s-diff-port-960022/id_rsa Username:docker}
	I1225 19:04:16.233678  310133 ssh_runner.go:195] Run: cat /etc/os-release
	I1225 19:04:16.237378  310133 main.go:144] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1225 19:04:16.237401  310133 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1225 19:04:16.237410  310133 filesync.go:126] Scanning /home/jenkins/minikube-integration/22301-5579/.minikube/addons for local assets ...
	I1225 19:04:16.237460  310133 filesync.go:126] Scanning /home/jenkins/minikube-integration/22301-5579/.minikube/files for local assets ...
	I1225 19:04:16.237537  310133 filesync.go:149] local asset: /home/jenkins/minikube-integration/22301-5579/.minikube/files/etc/ssl/certs/91122.pem -> 91122.pem in /etc/ssl/certs
	I1225 19:04:16.237630  310133 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1225 19:04:16.246190  310133 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22301-5579/.minikube/files/etc/ssl/certs/91122.pem --> /etc/ssl/certs/91122.pem (1708 bytes)
	I1225 19:04:16.266938  310133 start.go:296] duration metric: took 148.402747ms for postStartSetup
	I1225 19:04:16.267041  310133 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1225 19:04:16.267087  310133 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-960022
	I1225 19:04:16.286860  310133 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33108 SSHKeyPath:/home/jenkins/minikube-integration/22301-5579/.minikube/machines/default-k8s-diff-port-960022/id_rsa Username:docker}
	I1225 19:04:16.376250  310133 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1225 19:04:16.380615  310133 fix.go:56] duration metric: took 4.805100942s for fixHost
	I1225 19:04:16.380644  310133 start.go:83] releasing machines lock for "default-k8s-diff-port-960022", held for 4.80515252s
	I1225 19:04:16.380707  310133 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-960022
	I1225 19:04:16.230995  308802 addons.go:530] duration metric: took 2.330959709s for enable addons: enabled=[storage-provisioner dashboard default-storageclass]
	I1225 19:04:16.576872  308802 api_server.go:299] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1225 19:04:16.582258  308802 api_server.go:325] https://192.168.85.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1225 19:04:16.582286  308802 api_server.go:103] status: https://192.168.85.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1225 19:04:17.077064  308802 api_server.go:299] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1225 19:04:17.081546  308802 api_server.go:325] https://192.168.85.2:8443/healthz returned 200:
	ok
	I1225 19:04:17.082588  308802 api_server.go:141] control plane version: v1.35.0-rc.1
	I1225 19:04:17.082616  308802 api_server.go:131] duration metric: took 3.006351181s to wait for apiserver health ...
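
The 403 -> 500 -> 200 progression above is the apiserver finishing its post-start hooks (the 500 body lists rbac/bootstrap-roles and the priority-class bootstrap as still pending). A rough shell equivalent of this readiness wait, not minikube's actual code, just the same probe against the same endpoint:

	# Poll /healthz until it returns HTTP 200; 403/500 mean "not ready yet".
	until [ "$(curl -sk -o /dev/null -w '%{http_code}' https://192.168.85.2:8443/healthz)" = "200" ]; do
	  sleep 0.5
	done
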
	I1225 19:04:17.082627  308802 system_pods.go:43] waiting for kube-system pods to appear ...
	I1225 19:04:17.086311  308802 system_pods.go:59] 8 kube-system pods found
	I1225 19:04:17.086343  308802 system_pods.go:61] "coredns-7d764666f9-hsm6h" [650e5fe1-fc5a-4f59-86ae-9bee4f454a6c] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint(s). no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1225 19:04:17.086351  308802 system_pods.go:61] "etcd-newest-cni-731832" [5dd7d1d7-ba36-4070-b68a-e45da3f0a4e4] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1225 19:04:17.086362  308802 system_pods.go:61] "kindnet-l587m" [6a88d1e0-b81d-4b51-a2dd-283548deb416] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1225 19:04:17.086371  308802 system_pods.go:61] "kube-apiserver-newest-cni-731832" [ec1a8903-a48a-4dd4-a9c9-2b44931f0f54] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1225 19:04:17.086377  308802 system_pods.go:61] "kube-controller-manager-newest-cni-731832" [0f388c1f-3938-4912-8aa7-4cd5c107b62a] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1225 19:04:17.086386  308802 system_pods.go:61] "kube-proxy-gnqfh" [7a8b403f-215a-402e-80a0-8c070cdc4875] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1225 19:04:17.086393  308802 system_pods.go:61] "kube-scheduler-newest-cni-731832" [7fa22a28-98a7-4b81-8660-fa3e637a8d0a] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1225 19:04:17.086400  308802 system_pods.go:61] "storage-provisioner" [c0825e53-f743-4887-ab64-13e5553dca5f] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint(s). no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1225 19:04:17.086411  308802 system_pods.go:74] duration metric: took 3.772553ms to wait for pod list to return data ...
	I1225 19:04:17.086420  308802 default_sa.go:34] waiting for default service account to be created ...
	I1225 19:04:17.088719  308802 default_sa.go:45] found service account: "default"
	I1225 19:04:17.088737  308802 default_sa.go:55] duration metric: took 2.310133ms for default service account to be created ...
	I1225 19:04:17.088747  308802 kubeadm.go:587] duration metric: took 3.188778368s to wait for: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1225 19:04:17.088763  308802 node_conditions.go:102] verifying NodePressure condition ...
	I1225 19:04:17.090886  308802 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1225 19:04:17.090926  308802 node_conditions.go:123] node cpu capacity is 8
	I1225 19:04:17.090944  308802 node_conditions.go:105] duration metric: took 2.174956ms to run NodePressure ...
	I1225 19:04:17.090958  308802 start.go:242] waiting for startup goroutines ...
	I1225 19:04:17.090975  308802 start.go:247] waiting for cluster config update ...
	I1225 19:04:17.090994  308802 start.go:256] writing updated cluster config ...
	I1225 19:04:17.091241  308802 ssh_runner.go:195] Run: rm -f paused
	I1225 19:04:17.141619  308802 start.go:625] kubectl: 1.35.0, cluster: 1.35.0-rc.1 (minor skew: 0)
	I1225 19:04:17.144303  308802 out.go:179] * Done! kubectl is now configured to use "newest-cni-731832" cluster and "default" namespace by default
	I1225 19:04:16.398226  310133 ssh_runner.go:195] Run: cat /version.json
	I1225 19:04:16.398273  310133 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-960022
	I1225 19:04:16.398322  310133 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1225 19:04:16.398385  310133 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-960022
	I1225 19:04:16.416283  310133 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33108 SSHKeyPath:/home/jenkins/minikube-integration/22301-5579/.minikube/machines/default-k8s-diff-port-960022/id_rsa Username:docker}
	I1225 19:04:16.417655  310133 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33108 SSHKeyPath:/home/jenkins/minikube-integration/22301-5579/.minikube/machines/default-k8s-diff-port-960022/id_rsa Username:docker}
	I1225 19:04:16.504284  310133 ssh_runner.go:195] Run: systemctl --version
	I1225 19:04:16.564467  310133 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1225 19:04:16.605355  310133 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1225 19:04:16.610648  310133 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1225 19:04:16.610719  310133 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1225 19:04:16.620669  310133 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1225 19:04:16.620697  310133 start.go:496] detecting cgroup driver to use...
	I1225 19:04:16.620736  310133 detect.go:190] detected "systemd" cgroup driver on host os
	I1225 19:04:16.620799  310133 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1225 19:04:16.638659  310133 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1225 19:04:16.653060  310133 docker.go:218] disabling cri-docker service (if available) ...
	I1225 19:04:16.653133  310133 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1225 19:04:16.671670  310133 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1225 19:04:16.686735  310133 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1225 19:04:16.791798  310133 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1225 19:04:16.882077  310133 docker.go:234] disabling docker service ...
	I1225 19:04:16.882140  310133 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1225 19:04:16.896437  310133 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1225 19:04:16.909102  310133 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1225 19:04:16.996695  310133 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1225 19:04:17.082415  310133 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1225 19:04:17.096574  310133 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1225 19:04:17.114802  310133 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1225 19:04:17.114867  310133 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1225 19:04:17.123529  310133 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1225 19:04:17.123607  310133 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1225 19:04:17.132390  310133 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1225 19:04:17.141573  310133 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1225 19:04:17.150190  310133 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1225 19:04:17.157938  310133 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1225 19:04:17.169834  310133 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1225 19:04:17.179731  310133 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1225 19:04:17.188952  310133 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1225 19:04:17.197045  310133 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1225 19:04:17.205792  310133 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1225 19:04:17.300845  310133 ssh_runner.go:195] Run: sudo systemctl restart crio
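
The sed edits above rewrite /etc/crio/crio.conf.d/02-crio.conf in place before this restart. Under the assumptions those commands encode, the relevant keys should afterwards read roughly as shown in the comments below (illustrative, not captured from the run):

	sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' /etc/crio/crio.conf.d/02-crio.conf
	# pause_image = "registry.k8s.io/pause:3.10.1"
	# cgroup_manager = "systemd"
	# conmon_cgroup = "pod"
	#   "net.ipv4.ip_unprivileged_port_start=0",
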
	I1225 19:04:17.433231  310133 start.go:553] Will wait 60s for socket path /var/run/crio/crio.sock
	I1225 19:04:17.433306  310133 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1225 19:04:17.438202  310133 start.go:574] Will wait 60s for crictl version
	I1225 19:04:17.438266  310133 ssh_runner.go:195] Run: which crictl
	I1225 19:04:17.442941  310133 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1225 19:04:17.468383  310133 start.go:590] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1225 19:04:17.468461  310133 ssh_runner.go:195] Run: crio --version
	I1225 19:04:17.498633  310133 ssh_runner.go:195] Run: crio --version
	I1225 19:04:17.530717  310133 out.go:179] * Preparing Kubernetes v1.34.3 on CRI-O 1.34.3 ...
	I1225 19:04:12.592669  260034 logs.go:123] Gathering logs for etcd [b9e20d208a63ece66f4b7d503d5220ba92810f69d6963dee30a8cf70eeec5508] ...
	I1225 19:04:12.592702  260034 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b9e20d208a63ece66f4b7d503d5220ba92810f69d6963dee30a8cf70eeec5508"
	I1225 19:04:12.625173  260034 logs.go:123] Gathering logs for kube-scheduler [5908cdc560e3221d929c96c779b2e73ece21f96acb63b3adbf448cc7de791f2b] ...
	I1225 19:04:12.625199  260034 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5908cdc560e3221d929c96c779b2e73ece21f96acb63b3adbf448cc7de791f2b"
	I1225 19:04:12.652502  260034 logs.go:123] Gathering logs for kube-controller-manager [0887dc59ddb6090de10772f8a1f50ab0d2afe1586d7c8d118ffeef1810deb88d] ...
	I1225 19:04:12.652526  260034 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 0887dc59ddb6090de10772f8a1f50ab0d2afe1586d7c8d118ffeef1810deb88d"
	I1225 19:04:12.679345  260034 logs.go:123] Gathering logs for CRI-O ...
	I1225 19:04:12.679391  260034 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1225 19:04:12.734993  260034 logs.go:123] Gathering logs for kubelet ...
	I1225 19:04:12.735025  260034 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1225 19:04:12.825049  260034 logs.go:123] Gathering logs for kube-apiserver [1d4a54214a1675f6c77eff77964da947e4f00cff45e007ad6adb0c898c3e76fa] ...
	I1225 19:04:12.825076  260034 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 1d4a54214a1675f6c77eff77964da947e4f00cff45e007ad6adb0c898c3e76fa"
	I1225 19:04:12.857511  260034 logs.go:123] Gathering logs for kube-scheduler [47a1daa4bde269c4add70fd957319c2bd011e78566a1187d9a76b78fb4005782] ...
	I1225 19:04:12.857537  260034 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 47a1daa4bde269c4add70fd957319c2bd011e78566a1187d9a76b78fb4005782"
	I1225 19:04:12.888675  260034 logs.go:123] Gathering logs for kube-controller-manager [33343dda71f9e4a85025812aa89a625b9ed4aacd8cf40e537a040b7a352c6338] ...
	I1225 19:04:12.888701  260034 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 33343dda71f9e4a85025812aa89a625b9ed4aacd8cf40e537a040b7a352c6338"
	I1225 19:04:12.919091  260034 logs.go:123] Gathering logs for container status ...
	I1225 19:04:12.919135  260034 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1225 19:04:12.953107  260034 logs.go:123] Gathering logs for dmesg ...
	I1225 19:04:12.953137  260034 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1225 19:04:12.969636  260034 logs.go:123] Gathering logs for describe nodes ...
	I1225 19:04:12.969678  260034 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1225 19:04:17.531876  310133 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-960022 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1225 19:04:17.554280  310133 ssh_runner.go:195] Run: grep 192.168.103.1	host.minikube.internal$ /etc/hosts
	I1225 19:04:17.559039  310133 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.103.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1225 19:04:17.570497  310133 kubeadm.go:884] updating cluster {Name:default-k8s-diff-port-960022 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:default-k8s-diff-port-960022 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8444 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false} ...
	I1225 19:04:17.570608  310133 preload.go:188] Checking if preload exists for k8s version v1.34.3 and runtime crio
	I1225 19:04:17.570651  310133 ssh_runner.go:195] Run: sudo crictl images --output json
	I1225 19:04:17.608005  310133 crio.go:561] all images are preloaded for cri-o runtime.
	I1225 19:04:17.608027  310133 crio.go:433] Images already preloaded, skipping extraction
	I1225 19:04:17.608070  310133 ssh_runner.go:195] Run: sudo crictl images --output json
	I1225 19:04:17.638724  310133 crio.go:561] all images are preloaded for cri-o runtime.
	I1225 19:04:17.638750  310133 cache_images.go:86] Images are preloaded, skipping loading
	I1225 19:04:17.638759  310133 kubeadm.go:935] updating node { 192.168.103.2 8444 v1.34.3 crio true true} ...
	I1225 19:04:17.638924  310133 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=default-k8s-diff-port-960022 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.103.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.3 ClusterName:default-k8s-diff-port-960022 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1225 19:04:17.639025  310133 ssh_runner.go:195] Run: crio config
	I1225 19:04:17.693222  310133 cni.go:84] Creating CNI manager for ""
	I1225 19:04:17.693250  310133 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1225 19:04:17.693267  310133 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1225 19:04:17.693298  310133 kubeadm.go:197] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.103.2 APIServerPort:8444 KubernetesVersion:v1.34.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-960022 NodeName:default-k8s-diff-port-960022 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.103.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.103.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1225 19:04:17.693419  310133 kubeadm.go:203] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.103.2
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-960022"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.103.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.103.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1225 19:04:17.693489  310133 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.3
	I1225 19:04:17.702443  310133 binaries.go:51] Found k8s binaries, skipping transfer
	I1225 19:04:17.702509  310133 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1225 19:04:17.711085  310133 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (379 bytes)
	I1225 19:04:17.724554  310133 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1225 19:04:17.739239  310133 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2227 bytes)
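
The kubeadm configuration printed above is what gets copied to /var/tmp/minikube/kubeadm.yaml.new here. Outside the test run, a generated config like this can be sanity-checked before kubeadm consumes it, assuming a kubeadm new enough (v1.27+) to provide the validate subcommand:

	sudo kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new
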
	I1225 19:04:17.752708  310133 ssh_runner.go:195] Run: grep 192.168.103.2	control-plane.minikube.internal$ /etc/hosts
	I1225 19:04:17.756382  310133 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.103.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1225 19:04:17.766012  310133 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1225 19:04:17.852884  310133 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1225 19:04:17.875868  310133 certs.go:69] Setting up /home/jenkins/minikube-integration/22301-5579/.minikube/profiles/default-k8s-diff-port-960022 for IP: 192.168.103.2
	I1225 19:04:17.875921  310133 certs.go:195] generating shared ca certs ...
	I1225 19:04:17.875959  310133 certs.go:227] acquiring lock for ca certs: {Name:mkc96ab6366f062029d385d20297063671b19bca Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1225 19:04:17.876141  310133 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22301-5579/.minikube/ca.key
	I1225 19:04:17.876213  310133 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22301-5579/.minikube/proxy-client-ca.key
	I1225 19:04:17.876233  310133 certs.go:257] generating profile certs ...
	I1225 19:04:17.876358  310133 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22301-5579/.minikube/profiles/default-k8s-diff-port-960022/client.key
	I1225 19:04:17.876738  310133 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22301-5579/.minikube/profiles/default-k8s-diff-port-960022/apiserver.key.a3ef6c0c
	I1225 19:04:17.876825  310133 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22301-5579/.minikube/profiles/default-k8s-diff-port-960022/proxy-client.key
	I1225 19:04:17.877000  310133 certs.go:484] found cert: /home/jenkins/minikube-integration/22301-5579/.minikube/certs/9112.pem (1338 bytes)
	W1225 19:04:17.877043  310133 certs.go:480] ignoring /home/jenkins/minikube-integration/22301-5579/.minikube/certs/9112_empty.pem, impossibly tiny 0 bytes
	I1225 19:04:17.877054  310133 certs.go:484] found cert: /home/jenkins/minikube-integration/22301-5579/.minikube/certs/ca-key.pem (1679 bytes)
	I1225 19:04:17.877090  310133 certs.go:484] found cert: /home/jenkins/minikube-integration/22301-5579/.minikube/certs/ca.pem (1078 bytes)
	I1225 19:04:17.877122  310133 certs.go:484] found cert: /home/jenkins/minikube-integration/22301-5579/.minikube/certs/cert.pem (1123 bytes)
	I1225 19:04:17.877157  310133 certs.go:484] found cert: /home/jenkins/minikube-integration/22301-5579/.minikube/certs/key.pem (1679 bytes)
	I1225 19:04:17.877212  310133 certs.go:484] found cert: /home/jenkins/minikube-integration/22301-5579/.minikube/files/etc/ssl/certs/91122.pem (1708 bytes)
	I1225 19:04:17.878542  310133 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22301-5579/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1225 19:04:17.902470  310133 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22301-5579/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1225 19:04:17.922753  310133 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22301-5579/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1225 19:04:17.944259  310133 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22301-5579/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1225 19:04:17.968484  310133 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22301-5579/.minikube/profiles/default-k8s-diff-port-960022/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1225 19:04:17.990349  310133 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22301-5579/.minikube/profiles/default-k8s-diff-port-960022/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1225 19:04:18.007777  310133 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22301-5579/.minikube/profiles/default-k8s-diff-port-960022/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1225 19:04:18.024328  310133 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22301-5579/.minikube/profiles/default-k8s-diff-port-960022/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1225 19:04:18.042555  310133 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22301-5579/.minikube/files/etc/ssl/certs/91122.pem --> /usr/share/ca-certificates/91122.pem (1708 bytes)
	I1225 19:04:18.060404  310133 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22301-5579/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1225 19:04:18.078655  310133 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22301-5579/.minikube/certs/9112.pem --> /usr/share/ca-certificates/9112.pem (1338 bytes)
	I1225 19:04:18.102349  310133 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (722 bytes)
	I1225 19:04:18.115413  310133 ssh_runner.go:195] Run: openssl version
	I1225 19:04:18.121430  310133 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/91122.pem
	I1225 19:04:18.129110  310133 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/91122.pem /etc/ssl/certs/91122.pem
	I1225 19:04:18.137184  310133 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/91122.pem
	I1225 19:04:18.140995  310133 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 25 18:34 /usr/share/ca-certificates/91122.pem
	I1225 19:04:18.141057  310133 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/91122.pem
	I1225 19:04:18.176707  310133 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1225 19:04:18.184835  310133 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1225 19:04:18.192735  310133 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1225 19:04:18.200610  310133 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1225 19:04:18.204731  310133 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 25 18:28 /usr/share/ca-certificates/minikubeCA.pem
	I1225 19:04:18.204783  310133 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1225 19:04:18.244400  310133 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1225 19:04:18.253481  310133 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/9112.pem
	I1225 19:04:18.261795  310133 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/9112.pem /etc/ssl/certs/9112.pem
	I1225 19:04:18.269481  310133 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/9112.pem
	I1225 19:04:18.273356  310133 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 25 18:34 /usr/share/ca-certificates/9112.pem
	I1225 19:04:18.273422  310133 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/9112.pem
	I1225 19:04:18.308838  310133 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
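
The paired openssl x509 -hash / test -L probes above imply that each installed CA certificate is expected to have an /etc/ssl/certs/<subject-hash>.0 symlink (b5213941.0 for minikubeCA.pem, 3ec20f2e.0 for 91122.pem, 51391683.0 for 9112.pem). The same check, written as a single line:

	test -L "/etc/ssl/certs/$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem).0" && echo present
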
	I1225 19:04:18.316595  310133 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1225 19:04:18.320334  310133 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1225 19:04:18.355507  310133 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1225 19:04:18.393013  310133 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1225 19:04:18.438330  310133 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1225 19:04:18.481636  310133 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1225 19:04:18.539097  310133 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1225 19:04:18.599548  310133 kubeadm.go:401] StartCluster: {Name:default-k8s-diff-port-960022 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:default-k8s-diff-port-960022 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8444 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1225 19:04:18.599652  310133 cri.go:61] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1225 19:04:18.599724  310133 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1225 19:04:18.635382  310133 cri.go:96] found id: "deb534fd994d4a2ae1235cd069ddaa760e1a5e6170fbf9a1ea236267d7a7dbf3"
	I1225 19:04:18.635505  310133 cri.go:96] found id: "d7afd3e6efe6f106fd792404c924d54e7a199c5c88a6c82664ffa1c729eee3ee"
	I1225 19:04:18.635513  310133 cri.go:96] found id: "e331a83a17cd96725879adde3c8dabff77823d5c1af59510c5a9822f15b9601d"
	I1225 19:04:18.635526  310133 cri.go:96] found id: "354a51e629671e49dd48aa32ce81ed41d5eaf4761e538194e03358bc1fcc7c09"
	I1225 19:04:18.635531  310133 cri.go:96] found id: ""
	I1225 19:04:18.635599  310133 ssh_runner.go:195] Run: sudo runc list -f json
	W1225 19:04:18.650167  310133 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-25T19:04:18Z" level=error msg="open /run/runc: no such file or directory"
	I1225 19:04:18.650231  310133 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1225 19:04:18.659169  310133 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1225 19:04:18.659186  310133 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1225 19:04:18.659238  310133 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1225 19:04:18.667126  310133 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1225 19:04:18.668148  310133 kubeconfig.go:47] verify endpoint returned: get endpoint: "default-k8s-diff-port-960022" does not appear in /home/jenkins/minikube-integration/22301-5579/kubeconfig
	I1225 19:04:18.668771  310133 kubeconfig.go:62] /home/jenkins/minikube-integration/22301-5579/kubeconfig needs updating (will repair): [kubeconfig missing "default-k8s-diff-port-960022" cluster setting kubeconfig missing "default-k8s-diff-port-960022" context setting]
	I1225 19:04:18.669448  310133 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22301-5579/kubeconfig: {Name:mk959de02482281f87c2171d9b2421941fad1e06 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1225 19:04:18.671055  310133 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1225 19:04:18.685888  310133 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.103.2
	I1225 19:04:18.686116  310133 kubeadm.go:602] duration metric: took 26.923081ms to restartPrimaryControlPlane
	I1225 19:04:18.686135  310133 kubeadm.go:403] duration metric: took 86.591882ms to StartCluster
	I1225 19:04:18.686154  310133 settings.go:142] acquiring lock: {Name:mk8db67a95daebdad9164c803819dcb179c3006a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1225 19:04:18.686220  310133 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22301-5579/kubeconfig
	I1225 19:04:18.688138  310133 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22301-5579/kubeconfig: {Name:mk959de02482281f87c2171d9b2421941fad1e06 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1225 19:04:18.688490  310133 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.103.2 Port:8444 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1225 19:04:18.688743  310133 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1225 19:04:18.689005  310133 addons.go:70] Setting storage-provisioner=true in profile "default-k8s-diff-port-960022"
	I1225 19:04:18.689026  310133 addons.go:239] Setting addon storage-provisioner=true in "default-k8s-diff-port-960022"
	W1225 19:04:18.689034  310133 addons.go:248] addon storage-provisioner should already be in state true
	I1225 19:04:18.689060  310133 host.go:66] Checking if "default-k8s-diff-port-960022" exists ...
	I1225 19:04:18.689589  310133 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-960022 --format={{.State.Status}}
	I1225 19:04:18.689758  310133 addons.go:70] Setting dashboard=true in profile "default-k8s-diff-port-960022"
	I1225 19:04:18.689776  310133 addons.go:239] Setting addon dashboard=true in "default-k8s-diff-port-960022"
	W1225 19:04:18.689784  310133 addons.go:248] addon dashboard should already be in state true
	I1225 19:04:18.689809  310133 host.go:66] Checking if "default-k8s-diff-port-960022" exists ...
	I1225 19:04:18.690354  310133 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-960022 --format={{.State.Status}}
	I1225 19:04:18.688955  310133 config.go:182] Loaded profile config "default-k8s-diff-port-960022": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1225 19:04:18.690605  310133 addons.go:70] Setting default-storageclass=true in profile "default-k8s-diff-port-960022"
	I1225 19:04:18.690622  310133 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-960022"
	I1225 19:04:18.690928  310133 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-960022 --format={{.State.Status}}
	I1225 19:04:18.691378  310133 out.go:179] * Verifying Kubernetes components...
	I1225 19:04:18.692413  310133 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1225 19:04:18.720470  310133 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1225 19:04:18.721612  310133 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1225 19:04:18.721631  310133 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	W1225 19:04:15.493636  301873 node_ready.go:57] node "auto-910464" has "Ready":"False" status (will retry)
	W1225 19:04:17.993795  301873 node_ready.go:57] node "auto-910464" has "Ready":"False" status (will retry)
	
	
	==> CRI-O <==
	Dec 25 19:04:16 newest-cni-731832 crio[524]: time="2025-12-25T19:04:16.531030472Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 25 19:04:16 newest-cni-731832 crio[524]: time="2025-12-25T19:04:16.533287922Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ip_unprivileged_port_start 0}: \"net.ipv4.ip_unprivileged_port_start\" not allowed with host net enabled" id=2ad17cf5-ac34-475d-abfc-03e4e3ee4bdc name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 25 19:04:16 newest-cni-731832 crio[524]: time="2025-12-25T19:04:16.533928593Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ip_unprivileged_port_start 0}: \"net.ipv4.ip_unprivileged_port_start\" not allowed with host net enabled" id=b21e68a9-fa6b-41e1-90b6-813a39a806f7 name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 25 19:04:16 newest-cni-731832 crio[524]: time="2025-12-25T19:04:16.534843428Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Dec 25 19:04:16 newest-cni-731832 crio[524]: time="2025-12-25T19:04:16.535318676Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Dec 25 19:04:16 newest-cni-731832 crio[524]: time="2025-12-25T19:04:16.535454258Z" level=info msg="Ran pod sandbox ed8cd62b93faa62fa90cffa91ea9826d4e395a8d77bdaa1617fe7961d6bdf824 with infra container: kube-system/kindnet-l587m/POD" id=2ad17cf5-ac34-475d-abfc-03e4e3ee4bdc name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 25 19:04:16 newest-cni-731832 crio[524]: time="2025-12-25T19:04:16.536135758Z" level=info msg="Ran pod sandbox 0349fb644bfb09dca1e06d207ac1671c600d5cf3494ce2cbf2891dc03db4a8f1 with infra container: kube-system/kube-proxy-gnqfh/POD" id=b21e68a9-fa6b-41e1-90b6-813a39a806f7 name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 25 19:04:16 newest-cni-731832 crio[524]: time="2025-12-25T19:04:16.536630522Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20251212-v0.29.0-alpha-105-g20ccfc88" id=1e6a5f43-1119-45ac-9536-2312ea7f39e2 name=/runtime.v1.ImageService/ImageStatus
	Dec 25 19:04:16 newest-cni-731832 crio[524]: time="2025-12-25T19:04:16.537167146Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.35.0-rc.1" id=bb28eb2c-15c3-46eb-9023-b5e4782a90f4 name=/runtime.v1.ImageService/ImageStatus
	Dec 25 19:04:16 newest-cni-731832 crio[524]: time="2025-12-25T19:04:16.537585808Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20251212-v0.29.0-alpha-105-g20ccfc88" id=d03161b5-7b37-4f4c-a87c-be93a606599d name=/runtime.v1.ImageService/ImageStatus
	Dec 25 19:04:16 newest-cni-731832 crio[524]: time="2025-12-25T19:04:16.5380723Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.35.0-rc.1" id=eaba4d41-3250-47aa-b735-58bac4e5d22c name=/runtime.v1.ImageService/ImageStatus
	Dec 25 19:04:16 newest-cni-731832 crio[524]: time="2025-12-25T19:04:16.538691774Z" level=info msg="Creating container: kube-system/kindnet-l587m/kindnet-cni" id=c2600e3c-04dd-4a76-86ed-2e545a2059c5 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 25 19:04:16 newest-cni-731832 crio[524]: time="2025-12-25T19:04:16.538790937Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 25 19:04:16 newest-cni-731832 crio[524]: time="2025-12-25T19:04:16.538958309Z" level=info msg="Creating container: kube-system/kube-proxy-gnqfh/kube-proxy" id=191a6cc7-0175-4467-8bd4-6f6e23e57f4a name=/runtime.v1.RuntimeService/CreateContainer
	Dec 25 19:04:16 newest-cni-731832 crio[524]: time="2025-12-25T19:04:16.539042512Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 25 19:04:16 newest-cni-731832 crio[524]: time="2025-12-25T19:04:16.544235771Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 25 19:04:16 newest-cni-731832 crio[524]: time="2025-12-25T19:04:16.544670961Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 25 19:04:16 newest-cni-731832 crio[524]: time="2025-12-25T19:04:16.544837147Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 25 19:04:16 newest-cni-731832 crio[524]: time="2025-12-25T19:04:16.545174535Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 25 19:04:16 newest-cni-731832 crio[524]: time="2025-12-25T19:04:16.568410086Z" level=info msg="Created container 30a747c2e4c477b43905a2ae570c93b6cc50fa6dc00fdd514232650211e0a2b6: kube-system/kindnet-l587m/kindnet-cni" id=c2600e3c-04dd-4a76-86ed-2e545a2059c5 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 25 19:04:16 newest-cni-731832 crio[524]: time="2025-12-25T19:04:16.568981221Z" level=info msg="Starting container: 30a747c2e4c477b43905a2ae570c93b6cc50fa6dc00fdd514232650211e0a2b6" id=0cb6bd3c-06d5-4578-b4a9-d530822e3f3c name=/runtime.v1.RuntimeService/StartContainer
	Dec 25 19:04:16 newest-cni-731832 crio[524]: time="2025-12-25T19:04:16.57069111Z" level=info msg="Started container" PID=1053 containerID=30a747c2e4c477b43905a2ae570c93b6cc50fa6dc00fdd514232650211e0a2b6 description=kube-system/kindnet-l587m/kindnet-cni id=0cb6bd3c-06d5-4578-b4a9-d530822e3f3c name=/runtime.v1.RuntimeService/StartContainer sandboxID=ed8cd62b93faa62fa90cffa91ea9826d4e395a8d77bdaa1617fe7961d6bdf824
	Dec 25 19:04:16 newest-cni-731832 crio[524]: time="2025-12-25T19:04:16.571748706Z" level=info msg="Created container 32ba98d006f6f3a3154c40ff151535abf5952d3effea067df2b776e9329f7596: kube-system/kube-proxy-gnqfh/kube-proxy" id=191a6cc7-0175-4467-8bd4-6f6e23e57f4a name=/runtime.v1.RuntimeService/CreateContainer
	Dec 25 19:04:16 newest-cni-731832 crio[524]: time="2025-12-25T19:04:16.572327485Z" level=info msg="Starting container: 32ba98d006f6f3a3154c40ff151535abf5952d3effea067df2b776e9329f7596" id=b2a3768a-3a8c-4847-b999-2d252d9586aa name=/runtime.v1.RuntimeService/StartContainer
	Dec 25 19:04:16 newest-cni-731832 crio[524]: time="2025-12-25T19:04:16.575135804Z" level=info msg="Started container" PID=1054 containerID=32ba98d006f6f3a3154c40ff151535abf5952d3effea067df2b776e9329f7596 description=kube-system/kube-proxy-gnqfh/kube-proxy id=b2a3768a-3a8c-4847-b999-2d252d9586aa name=/runtime.v1.RuntimeService/StartContainer sandboxID=0349fb644bfb09dca1e06d207ac1671c600d5cf3494ce2cbf2891dc03db4a8f1
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	32ba98d006f6f       af0321f3a4f388cfb978464739c323ebf891a7b0b50cdfd7179e92f141dad42a   4 seconds ago       Running             kube-proxy                1                   0349fb644bfb0       kube-proxy-gnqfh                            kube-system
	30a747c2e4c47       4921d7a6dffa922dd679732ba4797085c4f39e9a53bee8b6fdb1d463e8571251   4 seconds ago       Running             kindnet-cni               1                   ed8cd62b93faa       kindnet-l587m                               kube-system
	e02cd2fcac3d7       73f80cdc073daa4d501207f9e6dec1fa9eea5f27e8d347b8a0c4bad8811eecdc   7 seconds ago       Running             kube-scheduler            1                   27d73f44e000a       kube-scheduler-newest-cni-731832            kube-system
	75fd7f6e481e8       58865405a13bccac1d74bc3f446dddd22e6ef0d7ee8b52363c86dd31838976ce   7 seconds ago       Running             kube-apiserver            1                   c0e91b6faa2d1       kube-apiserver-newest-cni-731832            kube-system
	f7d1c87d00202       0a108f7189562e99793bdecab61fdf1a7c9d913af3385de9da17fb9d6ff430e2   7 seconds ago       Running             etcd                      1                   5ac0368980e19       etcd-newest-cni-731832                      kube-system
	7cd3b0eb1fd2e       5032a56602e1b9bd8856699701b6148aa1b9901d05b61f893df3b57f84aca614   7 seconds ago       Running             kube-controller-manager   1                   0ac3b1b4a9a8e       kube-controller-manager-newest-cni-731832   kube-system
	
	
	==> describe nodes <==
	Name:               newest-cni-731832
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=newest-cni-731832
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=65b0339f3ab6fa9cf527eb915d9288ef7a9c7fef
	                    minikube.k8s.io/name=newest-cni-731832
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_25T19_03_51_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 25 Dec 2025 19:03:45 +0000
	Taints:             node.kubernetes.io/not-ready:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  newest-cni-731832
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 25 Dec 2025 19:04:15 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 25 Dec 2025 19:04:15 +0000   Thu, 25 Dec 2025 19:03:44 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 25 Dec 2025 19:04:15 +0000   Thu, 25 Dec 2025 19:03:44 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 25 Dec 2025 19:04:15 +0000   Thu, 25 Dec 2025 19:03:44 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            False   Thu, 25 Dec 2025 19:04:15 +0000   Thu, 25 Dec 2025 19:03:44 +0000   KubeletNotReady              container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/cni/net.d/. Has your network provider started?
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    newest-cni-731832
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863360Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863360Ki
	  pods:               110
	System Info:
	  Machine ID:                 6a05ebbd9dbde47388fc365d694bbbfd
	  System UUID:                5b8d2f7a-018b-4c55-9c9b-3d6cf6b9276f
	  Boot ID:                    665c5054-bd76-444c-ba4d-23c4edde1464
	  Kernel Version:             6.8.0-1045-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.3
	  Kubelet Version:            v1.35.0-rc.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.42.0.0/24
	PodCIDRs:                     10.42.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  kube-system                 etcd-newest-cni-731832                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         33s
	  kube-system                 kindnet-l587m                                100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      26s
	  kube-system                 kube-apiserver-newest-cni-731832             250m (3%)     0 (0%)      0 (0%)           0 (0%)         35s
	  kube-system                 kube-controller-manager-newest-cni-731832    200m (2%)     0 (0%)      0 (0%)           0 (0%)         31s
	  kube-system                 kube-proxy-gnqfh                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         26s
	  kube-system                 kube-scheduler-newest-cni-731832             100m (1%)     0 (0%)      0 (0%)           0 (0%)         31s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (9%)   100m (1%)
	  memory             150Mi (0%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason          Age   From             Message
	  ----    ------          ----  ----             -------
	  Normal  RegisteredNode  27s   node-controller  Node newest-cni-731832 event: Registered Node newest-cni-731832 in Controller
	  Normal  RegisteredNode  3s    node-controller  Node newest-cni-731832 event: Registered Node newest-cni-731832 in Controller
	
	
	==> dmesg <==
	[Dec25 18:17] MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
	[  +0.001703] TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details.
	[  +0.001000] MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
	[  +0.084012] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
	[  +0.391152] i8042: Warning: Keylock active
	[  +0.010665] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.485479] block sda: the capability attribute has been deprecated.
	[  +0.079658] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.024208] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +4.790329] kauditd_printk_skb: 47 callbacks suppressed
	
	
	==> etcd [f7d1c87d0020257be0bb0226c540e4432cc1529072a6a6a02e9610ce7d2a72ad] <==
	{"level":"info","ts":"2025-12-25T19:04:13.815416Z","caller":"fileutil/purge.go:49","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-12-25T19:04:13.815467Z","caller":"fileutil/purge.go:49","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-12-25T19:04:13.815564Z","caller":"embed/etcd.go:640","msg":"serving peer traffic","address":"192.168.85.2:2380"}
	{"level":"info","ts":"2025-12-25T19:04:13.815604Z","caller":"embed/etcd.go:611","msg":"cmux::serve","address":"192.168.85.2:2380"}
	{"level":"info","ts":"2025-12-25T19:04:13.815729Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1981","msg":"9f0758e1c58a86ed switched to configuration voters=(11459225503572592365)"}
	{"level":"info","ts":"2025-12-25T19:04:13.815843Z","caller":"membership/cluster.go:433","msg":"ignore already added member","cluster-id":"68eaea490fab4e05","local-member-id":"9f0758e1c58a86ed","added-peer-id":"9f0758e1c58a86ed","added-peer-peer-urls":["https://192.168.85.2:2380"],"added-peer-is-learner":false}
	{"level":"info","ts":"2025-12-25T19:04:13.815966Z","caller":"membership/cluster.go:674","msg":"updated cluster version","cluster-id":"68eaea490fab4e05","local-member-id":"9f0758e1c58a86ed","from":"3.6","to":"3.6"}
	{"level":"info","ts":"2025-12-25T19:04:14.705434Z","logger":"raft","caller":"v3@v3.6.0/raft.go:988","msg":"9f0758e1c58a86ed is starting a new election at term 2"}
	{"level":"info","ts":"2025-12-25T19:04:14.705540Z","logger":"raft","caller":"v3@v3.6.0/raft.go:930","msg":"9f0758e1c58a86ed became pre-candidate at term 2"}
	{"level":"info","ts":"2025-12-25T19:04:14.705627Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1077","msg":"9f0758e1c58a86ed received MsgPreVoteResp from 9f0758e1c58a86ed at term 2"}
	{"level":"info","ts":"2025-12-25T19:04:14.705649Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1693","msg":"9f0758e1c58a86ed has received 1 MsgPreVoteResp votes and 0 vote rejections"}
	{"level":"info","ts":"2025-12-25T19:04:14.705669Z","logger":"raft","caller":"v3@v3.6.0/raft.go:912","msg":"9f0758e1c58a86ed became candidate at term 3"}
	{"level":"info","ts":"2025-12-25T19:04:14.706416Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1077","msg":"9f0758e1c58a86ed received MsgVoteResp from 9f0758e1c58a86ed at term 3"}
	{"level":"info","ts":"2025-12-25T19:04:14.706446Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1693","msg":"9f0758e1c58a86ed has received 1 MsgVoteResp votes and 0 vote rejections"}
	{"level":"info","ts":"2025-12-25T19:04:14.706462Z","logger":"raft","caller":"v3@v3.6.0/raft.go:970","msg":"9f0758e1c58a86ed became leader at term 3"}
	{"level":"info","ts":"2025-12-25T19:04:14.706470Z","logger":"raft","caller":"v3@v3.6.0/node.go:370","msg":"raft.node: 9f0758e1c58a86ed elected leader 9f0758e1c58a86ed at term 3"}
	{"level":"info","ts":"2025-12-25T19:04:14.707224Z","caller":"etcdserver/server.go:1820","msg":"published local member to cluster through raft","local-member-id":"9f0758e1c58a86ed","local-member-attributes":"{Name:newest-cni-731832 ClientURLs:[https://192.168.85.2:2379]}","cluster-id":"68eaea490fab4e05","publish-timeout":"7s"}
	{"level":"info","ts":"2025-12-25T19:04:14.707250Z","caller":"embed/serve.go:138","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-12-25T19:04:14.707246Z","caller":"embed/serve.go:138","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-12-25T19:04:14.707582Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-12-25T19:04:14.707660Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2025-12-25T19:04:14.708626Z","caller":"v3rpc/health.go:63","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-12-25T19:04:14.708606Z","caller":"v3rpc/health.go:63","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-12-25T19:04:14.711638Z","caller":"embed/serve.go:283","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2025-12-25T19:04:14.711710Z","caller":"embed/serve.go:283","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.85.2:2379"}
	
	
	==> kernel <==
	 19:04:21 up 46 min,  0 user,  load average: 4.02, 2.83, 1.96
	Linux newest-cni-731832 6.8.0-1045-gcp #48~22.04.1-Ubuntu SMP Tue Nov 25 13:07:56 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [30a747c2e4c477b43905a2ae570c93b6cc50fa6dc00fdd514232650211e0a2b6] <==
	I1225 19:04:16.798184       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1225 19:04:16.798419       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1225 19:04:16.798531       1 main.go:148] setting mtu 1500 for CNI 
	I1225 19:04:16.798560       1 main.go:178] kindnetd IP family: "ipv4"
	I1225 19:04:16.798591       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-12-25T19:04:16Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1225 19:04:16.999658       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1225 19:04:16.999699       1 controller.go:381] "Waiting for informer caches to sync"
	I1225 19:04:16.999716       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1225 19:04:16.999856       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1225 19:04:17.496699       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1225 19:04:17.496738       1 metrics.go:72] Registering metrics
	I1225 19:04:17.496968       1 controller.go:711] "Syncing nftables rules"
	
	
	==> kube-apiserver [75fd7f6e481e82625456301d656dce65b6f0292112145825cd68747d96e652ac] <==
	I1225 19:04:15.708532       1 shared_informer.go:377] "Caches are synced"
	I1225 19:04:15.710773       1 shared_informer.go:377] "Caches are synced"
	I1225 19:04:15.708572       1 shared_informer.go:377] "Caches are synced"
	I1225 19:04:15.710005       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1225 19:04:15.710021       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1225 19:04:15.711513       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1225 19:04:15.715294       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1225 19:04:15.716219       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1225 19:04:15.727043       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1225 19:04:15.732611       1 shared_informer.go:377] "Caches are synced"
	I1225 19:04:15.732634       1 policy_source.go:248] refreshing policies
	I1225 19:04:15.742853       1 cidrallocator.go:302] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1225 19:04:15.751841       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1225 19:04:15.983460       1 controller.go:667] quota admission added evaluator for: namespaces
	I1225 19:04:16.014937       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1225 19:04:16.032821       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1225 19:04:16.039468       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1225 19:04:16.048287       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1225 19:04:16.083386       1 alloc.go:329] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.102.98.203"}
	I1225 19:04:16.094549       1 alloc.go:329] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.104.42.120"}
	I1225 19:04:16.613035       1 storage_scheduling.go:139] all system priority classes are created successfully or already exist.
	I1225 19:04:19.285165       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1225 19:04:19.434581       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1225 19:04:19.484377       1 controller.go:667] quota admission added evaluator for: endpoints
	I1225 19:04:19.535691       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	
	
	==> kube-controller-manager [7cd3b0eb1fd2e4969002541b2f4ae25ee7229906d8fe3533bb4ab750efb6b446] <==
	I1225 19:04:18.881616       1 shared_informer.go:377] "Caches are synced"
	I1225 19:04:18.884548       1 garbagecollector.go:166] "Garbage collector: all resource monitors have synced"
	I1225 19:04:18.884565       1 garbagecollector.go:169] "Proceeding to collect garbage"
	I1225 19:04:18.881667       1 shared_informer.go:377] "Caches are synced"
	I1225 19:04:18.881694       1 shared_informer.go:377] "Caches are synced"
	I1225 19:04:18.868747       1 shared_informer.go:377] "Caches are synced"
	I1225 19:04:18.868656       1 shared_informer.go:377] "Caches are synced"
	I1225 19:04:18.867754       1 shared_informer.go:377] "Caches are synced"
	I1225 19:04:18.883184       1 shared_informer.go:377] "Caches are synced"
	I1225 19:04:18.868872       1 shared_informer.go:377] "Caches are synced"
	I1225 19:04:18.892183       1 shared_informer.go:377] "Caches are synced"
	I1225 19:04:18.881682       1 shared_informer.go:377] "Caches are synced"
	I1225 19:04:18.894874       1 shared_informer.go:377] "Caches are synced"
	I1225 19:04:18.897414       1 shared_informer.go:377] "Caches are synced"
	I1225 19:04:18.897744       1 shared_informer.go:377] "Caches are synced"
	I1225 19:04:18.897801       1 shared_informer.go:377] "Caches are synced"
	I1225 19:04:18.899464       1 shared_informer.go:377] "Caches are synced"
	I1225 19:04:18.899525       1 shared_informer.go:377] "Caches are synced"
	I1225 19:04:18.899550       1 shared_informer.go:377] "Caches are synced"
	I1225 19:04:18.900011       1 shared_informer.go:377] "Caches are synced"
	I1225 19:04:18.900022       1 shared_informer.go:377] "Caches are synced"
	I1225 19:04:18.900027       1 shared_informer.go:377] "Caches are synced"
	I1225 19:04:18.900278       1 shared_informer.go:377] "Caches are synced"
	I1225 19:04:18.900562       1 shared_informer.go:377] "Caches are synced"
	I1225 19:04:18.952232       1 shared_informer.go:377] "Caches are synced"
	
	
	==> kube-proxy [32ba98d006f6f3a3154c40ff151535abf5952d3effea067df2b776e9329f7596] <==
	I1225 19:04:16.614081       1 server_linux.go:53] "Using iptables proxy"
	I1225 19:04:16.670193       1 shared_informer.go:370] "Waiting for caches to sync"
	I1225 19:04:16.770967       1 shared_informer.go:377] "Caches are synced"
	I1225 19:04:16.771012       1 server.go:218] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1225 19:04:16.771140       1 server.go:255] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1225 19:04:16.792619       1 server.go:264] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1225 19:04:16.792683       1 server_linux.go:136] "Using iptables Proxier"
	I1225 19:04:16.799016       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1225 19:04:16.799518       1 server.go:529] "Version info" version="v1.35.0-rc.1"
	I1225 19:04:16.799604       1 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1225 19:04:16.802200       1 config.go:309] "Starting node config controller"
	I1225 19:04:16.802222       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1225 19:04:16.802579       1 config.go:403] "Starting serviceCIDR config controller"
	I1225 19:04:16.802591       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1225 19:04:16.802617       1 config.go:200] "Starting service config controller"
	I1225 19:04:16.802624       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1225 19:04:16.802654       1 config.go:106] "Starting endpoint slice config controller"
	I1225 19:04:16.802672       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1225 19:04:16.903080       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1225 19:04:16.903115       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1225 19:04:16.903152       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1225 19:04:16.903248       1 shared_informer.go:356] "Caches are synced" controller="node config"
	
	
	==> kube-scheduler [e02cd2fcac3d735d321c341c2fba7aabc974e0d4826fa67f14fd79754e0c64c4] <==
	I1225 19:04:14.075958       1 serving.go:386] Generated self-signed cert in-memory
	W1225 19:04:15.646277       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1225 19:04:15.646324       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1225 19:04:15.646337       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1225 19:04:15.646346       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1225 19:04:15.681232       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.35.0-rc.1"
	I1225 19:04:15.681273       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1225 19:04:15.685489       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1225 19:04:15.685708       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1225 19:04:15.686562       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1225 19:04:15.686580       1 shared_informer.go:370] "Waiting for caches to sync"
	I1225 19:04:15.786842       1 shared_informer.go:377] "Caches are synced"
	
	
	==> kubelet <==
	Dec 25 19:04:15 newest-cni-731832 kubelet[674]: I1225 19:04:15.831274     674 kubelet_node_status.go:77] "Successfully registered node" node="newest-cni-731832"
	Dec 25 19:04:15 newest-cni-731832 kubelet[674]: I1225 19:04:15.831302     674 kuberuntime_manager.go:2062] "Updating runtime config through cri with podcidr" CIDR="10.42.0.0/24"
	Dec 25 19:04:15 newest-cni-731832 kubelet[674]: I1225 19:04:15.832150     674 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.42.0.0/24"
	Dec 25 19:04:15 newest-cni-731832 kubelet[674]: E1225 19:04:15.837647     674 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"etcd-newest-cni-731832\" already exists" pod="kube-system/etcd-newest-cni-731832"
	Dec 25 19:04:15 newest-cni-731832 kubelet[674]: I1225 19:04:15.837681     674 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-newest-cni-731832"
	Dec 25 19:04:15 newest-cni-731832 kubelet[674]: E1225 19:04:15.845749     674 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-apiserver-newest-cni-731832\" already exists" pod="kube-system/kube-apiserver-newest-cni-731832"
	Dec 25 19:04:15 newest-cni-731832 kubelet[674]: I1225 19:04:15.845789     674 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-newest-cni-731832"
	Dec 25 19:04:15 newest-cni-731832 kubelet[674]: E1225 19:04:15.851919     674 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-controller-manager-newest-cni-731832\" already exists" pod="kube-system/kube-controller-manager-newest-cni-731832"
	Dec 25 19:04:15 newest-cni-731832 kubelet[674]: I1225 19:04:15.851951     674 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-newest-cni-731832"
	Dec 25 19:04:15 newest-cni-731832 kubelet[674]: E1225 19:04:15.858451     674 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-scheduler-newest-cni-731832\" already exists" pod="kube-system/kube-scheduler-newest-cni-731832"
	Dec 25 19:04:16 newest-cni-731832 kubelet[674]: I1225 19:04:16.221029     674 apiserver.go:52] "Watching apiserver"
	Dec 25 19:04:16 newest-cni-731832 kubelet[674]: E1225 19:04:16.225733     674 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-controller-manager-newest-cni-731832" containerName="kube-controller-manager"
	Dec 25 19:04:16 newest-cni-731832 kubelet[674]: I1225 19:04:16.230858     674 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
	Dec 25 19:04:16 newest-cni-731832 kubelet[674]: I1225 19:04:16.253036     674 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/6a88d1e0-b81d-4b51-a2dd-283548deb416-xtables-lock\") pod \"kindnet-l587m\" (UID: \"6a88d1e0-b81d-4b51-a2dd-283548deb416\") " pod="kube-system/kindnet-l587m"
	Dec 25 19:04:16 newest-cni-731832 kubelet[674]: I1225 19:04:16.253257     674 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/7a8b403f-215a-402e-80a0-8c070cdc4875-xtables-lock\") pod \"kube-proxy-gnqfh\" (UID: \"7a8b403f-215a-402e-80a0-8c070cdc4875\") " pod="kube-system/kube-proxy-gnqfh"
	Dec 25 19:04:16 newest-cni-731832 kubelet[674]: I1225 19:04:16.253312     674 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/7a8b403f-215a-402e-80a0-8c070cdc4875-lib-modules\") pod \"kube-proxy-gnqfh\" (UID: \"7a8b403f-215a-402e-80a0-8c070cdc4875\") " pod="kube-system/kube-proxy-gnqfh"
	Dec 25 19:04:16 newest-cni-731832 kubelet[674]: I1225 19:04:16.253409     674 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/6a88d1e0-b81d-4b51-a2dd-283548deb416-cni-cfg\") pod \"kindnet-l587m\" (UID: \"6a88d1e0-b81d-4b51-a2dd-283548deb416\") " pod="kube-system/kindnet-l587m"
	Dec 25 19:04:16 newest-cni-731832 kubelet[674]: I1225 19:04:16.253448     674 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/6a88d1e0-b81d-4b51-a2dd-283548deb416-lib-modules\") pod \"kindnet-l587m\" (UID: \"6a88d1e0-b81d-4b51-a2dd-283548deb416\") " pod="kube-system/kindnet-l587m"
	Dec 25 19:04:16 newest-cni-731832 kubelet[674]: E1225 19:04:16.261499     674 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-newest-cni-731832" containerName="kube-scheduler"
	Dec 25 19:04:16 newest-cni-731832 kubelet[674]: E1225 19:04:16.261615     674 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/etcd-newest-cni-731832" containerName="etcd"
	Dec 25 19:04:16 newest-cni-731832 kubelet[674]: E1225 19:04:16.261838     674 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-newest-cni-731832" containerName="kube-apiserver"
	Dec 25 19:04:18 newest-cni-731832 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Dec 25 19:04:18 newest-cni-731832 kubelet[674]: I1225 19:04:18.165288     674 dynamic_cafile_content.go:175] "Shutting down controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	Dec 25 19:04:18 newest-cni-731832 systemd[1]: kubelet.service: Deactivated successfully.
	Dec 25 19:04:18 newest-cni-731832 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	

                                                
                                                
-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-731832 -n newest-cni-731832
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-731832 -n newest-cni-731832: exit status 2 (333.411675ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:270: (dbg) Run:  kubectl --context newest-cni-731832 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:281: non-running pods: coredns-7d764666f9-hsm6h storage-provisioner dashboard-metrics-scraper-867fb5f87b-xmcmm kubernetes-dashboard-b84665fb8-qz6h4
helpers_test.go:283: ======> post-mortem[TestStartStop/group/newest-cni/serial/Pause]: describe non-running pods <======
helpers_test.go:286: (dbg) Run:  kubectl --context newest-cni-731832 describe pod coredns-7d764666f9-hsm6h storage-provisioner dashboard-metrics-scraper-867fb5f87b-xmcmm kubernetes-dashboard-b84665fb8-qz6h4
helpers_test.go:286: (dbg) Non-zero exit: kubectl --context newest-cni-731832 describe pod coredns-7d764666f9-hsm6h storage-provisioner dashboard-metrics-scraper-867fb5f87b-xmcmm kubernetes-dashboard-b84665fb8-qz6h4: exit status 1 (60.633735ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "coredns-7d764666f9-hsm6h" not found
	Error from server (NotFound): pods "storage-provisioner" not found
	Error from server (NotFound): pods "dashboard-metrics-scraper-867fb5f87b-xmcmm" not found
	Error from server (NotFound): pods "kubernetes-dashboard-b84665fb8-qz6h4" not found

                                                
                                                
** /stderr **
helpers_test.go:288: kubectl --context newest-cni-731832 describe pod coredns-7d764666f9-hsm6h storage-provisioner dashboard-metrics-scraper-867fb5f87b-xmcmm kubernetes-dashboard-b84665fb8-qz6h4: exit status 1
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect newest-cni-731832
helpers_test.go:244: (dbg) docker inspect newest-cni-731832:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "0d7dffda1d2c4721b68cb1c1ffbf33c95c8a8bd29b65c76f162d82b8c375ce81",
	        "Created": "2025-12-25T19:03:37.514242235Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 309027,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-25T19:04:07.094478298Z",
	            "FinishedAt": "2025-12-25T19:04:06.156184287Z"
	        },
	        "Image": "sha256:c444b5e11df9d6bc496256b0e11f4e11deb33a885211ded2c3a4667df55bcb6b",
	        "ResolvConfPath": "/var/lib/docker/containers/0d7dffda1d2c4721b68cb1c1ffbf33c95c8a8bd29b65c76f162d82b8c375ce81/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/0d7dffda1d2c4721b68cb1c1ffbf33c95c8a8bd29b65c76f162d82b8c375ce81/hostname",
	        "HostsPath": "/var/lib/docker/containers/0d7dffda1d2c4721b68cb1c1ffbf33c95c8a8bd29b65c76f162d82b8c375ce81/hosts",
	        "LogPath": "/var/lib/docker/containers/0d7dffda1d2c4721b68cb1c1ffbf33c95c8a8bd29b65c76f162d82b8c375ce81/0d7dffda1d2c4721b68cb1c1ffbf33c95c8a8bd29b65c76f162d82b8c375ce81-json.log",
	        "Name": "/newest-cni-731832",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "newest-cni-731832:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "newest-cni-731832",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "0d7dffda1d2c4721b68cb1c1ffbf33c95c8a8bd29b65c76f162d82b8c375ce81",
	                "LowerDir": "/var/lib/docker/overlay2/d5cd8bb494ab04f4dcb5a30632bc8011864511df29c5ed2fb3f9b7b62d5e6d92-init/diff:/var/lib/docker/overlay2/8152586e7e91edad0090b5c322534edd1346ae6dc28cbca1827aa4c23f366758/diff",
	                "MergedDir": "/var/lib/docker/overlay2/d5cd8bb494ab04f4dcb5a30632bc8011864511df29c5ed2fb3f9b7b62d5e6d92/merged",
	                "UpperDir": "/var/lib/docker/overlay2/d5cd8bb494ab04f4dcb5a30632bc8011864511df29c5ed2fb3f9b7b62d5e6d92/diff",
	                "WorkDir": "/var/lib/docker/overlay2/d5cd8bb494ab04f4dcb5a30632bc8011864511df29c5ed2fb3f9b7b62d5e6d92/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "newest-cni-731832",
	                "Source": "/var/lib/docker/volumes/newest-cni-731832/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "newest-cni-731832",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "newest-cni-731832",
	                "name.minikube.sigs.k8s.io": "newest-cni-731832",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "137e7e11c1af0c255dc0bba4c9516b4e31185bd3b67b32c2456c89d52efc61f8",
	            "SandboxKey": "/var/run/docker/netns/137e7e11c1af",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33103"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33104"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33107"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33105"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33106"
	                    }
	                ]
	            },
	            "Networks": {
	                "newest-cni-731832": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "360ef2d655feed4b5ef1f2b45737dda354b50d02cd936b222228be43a9a6ef2b",
	                    "EndpointID": "8b4726365c05d8bfa7fb609f5719653f2e5ca5c46e531d275990249ae5c87ff2",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "MacAddress": "72:60:42:6f:d4:ea",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "newest-cni-731832",
	                        "0d7dffda1d2c"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-731832 -n newest-cni-731832
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-731832 -n newest-cni-731832: exit status 2 (338.146141ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
helpers_test.go:253: <<< TestStartStop/group/newest-cni/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-731832 logs -n 25
helpers_test.go:261: TestStartStop/group/newest-cni/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────
────────────┐
	│ COMMAND │                                                                                                                        ARGS                                                                                                                        │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────
────────────┤
	│ pause   │ -p old-k8s-version-163446 --alsologtostderr -v=1                                                                                                                                                                                                   │ old-k8s-version-163446       │ jenkins │ v1.37.0 │ 25 Dec 25 19:02 UTC │                     │
	│ delete  │ -p old-k8s-version-163446                                                                                                                                                                                                                          │ old-k8s-version-163446       │ jenkins │ v1.37.0 │ 25 Dec 25 19:02 UTC │ 25 Dec 25 19:03 UTC │
	│ delete  │ -p old-k8s-version-163446                                                                                                                                                                                                                          │ old-k8s-version-163446       │ jenkins │ v1.37.0 │ 25 Dec 25 19:03 UTC │ 25 Dec 25 19:03 UTC │
	│ delete  │ -p disable-driver-mounts-102827                                                                                                                                                                                                                    │ disable-driver-mounts-102827 │ jenkins │ v1.37.0 │ 25 Dec 25 19:03 UTC │ 25 Dec 25 19:03 UTC │
	│ start   │ -p default-k8s-diff-port-960022 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.3                                                                           │ default-k8s-diff-port-960022 │ jenkins │ v1.37.0 │ 25 Dec 25 19:03 UTC │ 25 Dec 25 19:03 UTC │
	│ image   │ no-preload-148352 image list --format=json                                                                                                                                                                                                         │ no-preload-148352            │ jenkins │ v1.37.0 │ 25 Dec 25 19:03 UTC │ 25 Dec 25 19:03 UTC │
	│ pause   │ -p no-preload-148352 --alsologtostderr -v=1                                                                                                                                                                                                        │ no-preload-148352            │ jenkins │ v1.37.0 │ 25 Dec 25 19:03 UTC │                     │
	│ delete  │ -p no-preload-148352                                                                                                                                                                                                                               │ no-preload-148352            │ jenkins │ v1.37.0 │ 25 Dec 25 19:03 UTC │ 25 Dec 25 19:03 UTC │
	│ delete  │ -p no-preload-148352                                                                                                                                                                                                                               │ no-preload-148352            │ jenkins │ v1.37.0 │ 25 Dec 25 19:03 UTC │ 25 Dec 25 19:03 UTC │
	│ start   │ -p newest-cni-731832 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-rc.1 │ newest-cni-731832            │ jenkins │ v1.37.0 │ 25 Dec 25 19:03 UTC │ 25 Dec 25 19:03 UTC │
	│ image   │ embed-certs-684693 image list --format=json                                                                                                                                                                                                        │ embed-certs-684693           │ jenkins │ v1.37.0 │ 25 Dec 25 19:03 UTC │ 25 Dec 25 19:03 UTC │
	│ pause   │ -p embed-certs-684693 --alsologtostderr -v=1                                                                                                                                                                                                       │ embed-certs-684693           │ jenkins │ v1.37.0 │ 25 Dec 25 19:03 UTC │                     │
	│ delete  │ -p embed-certs-684693                                                                                                                                                                                                                              │ embed-certs-684693           │ jenkins │ v1.37.0 │ 25 Dec 25 19:03 UTC │ 25 Dec 25 19:03 UTC │
	│ delete  │ -p embed-certs-684693                                                                                                                                                                                                                              │ embed-certs-684693           │ jenkins │ v1.37.0 │ 25 Dec 25 19:03 UTC │ 25 Dec 25 19:03 UTC │
	│ start   │ -p auto-910464 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio                                                                                                                            │ auto-910464                  │ jenkins │ v1.37.0 │ 25 Dec 25 19:03 UTC │                     │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-960022 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                 │ default-k8s-diff-port-960022 │ jenkins │ v1.37.0 │ 25 Dec 25 19:03 UTC │                     │
	│ stop    │ -p default-k8s-diff-port-960022 --alsologtostderr -v=3                                                                                                                                                                                             │ default-k8s-diff-port-960022 │ jenkins │ v1.37.0 │ 25 Dec 25 19:03 UTC │ 25 Dec 25 19:04 UTC │
	│ addons  │ enable metrics-server -p newest-cni-731832 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                            │ newest-cni-731832            │ jenkins │ v1.37.0 │ 25 Dec 25 19:03 UTC │                     │
	│ stop    │ -p newest-cni-731832 --alsologtostderr -v=3                                                                                                                                                                                                        │ newest-cni-731832            │ jenkins │ v1.37.0 │ 25 Dec 25 19:03 UTC │ 25 Dec 25 19:04 UTC │
	│ addons  │ enable dashboard -p newest-cni-731832 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                       │ newest-cni-731832            │ jenkins │ v1.37.0 │ 25 Dec 25 19:04 UTC │ 25 Dec 25 19:04 UTC │
	│ start   │ -p newest-cni-731832 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-rc.1 │ newest-cni-731832            │ jenkins │ v1.37.0 │ 25 Dec 25 19:04 UTC │ 25 Dec 25 19:04 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-960022 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                            │ default-k8s-diff-port-960022 │ jenkins │ v1.37.0 │ 25 Dec 25 19:04 UTC │ 25 Dec 25 19:04 UTC │
	│ start   │ -p default-k8s-diff-port-960022 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.3                                                                           │ default-k8s-diff-port-960022 │ jenkins │ v1.37.0 │ 25 Dec 25 19:04 UTC │                     │
	│ image   │ newest-cni-731832 image list --format=json                                                                                                                                                                                                         │ newest-cni-731832            │ jenkins │ v1.37.0 │ 25 Dec 25 19:04 UTC │ 25 Dec 25 19:04 UTC │
	│ pause   │ -p newest-cni-731832 --alsologtostderr -v=1                                                                                                                                                                                                        │ newest-cni-731832            │ jenkins │ v1.37.0 │ 25 Dec 25 19:04 UTC │                     │
	└─────────┴────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────
────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/25 19:04:11
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1225 19:04:11.384704  310133 out.go:360] Setting OutFile to fd 1 ...
	I1225 19:04:11.384841  310133 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1225 19:04:11.384852  310133 out.go:374] Setting ErrFile to fd 2...
	I1225 19:04:11.384859  310133 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1225 19:04:11.385184  310133 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22301-5579/.minikube/bin
	I1225 19:04:11.385734  310133 out.go:368] Setting JSON to false
	I1225 19:04:11.386982  310133 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":2799,"bootTime":1766686652,"procs":304,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1225 19:04:11.387047  310133 start.go:143] virtualization: kvm guest
	I1225 19:04:11.389145  310133 out.go:179] * [default-k8s-diff-port-960022] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1225 19:04:11.391235  310133 out.go:179]   - MINIKUBE_LOCATION=22301
	I1225 19:04:11.391228  310133 notify.go:221] Checking for updates...
	I1225 19:04:11.394351  310133 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1225 19:04:11.396029  310133 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22301-5579/kubeconfig
	I1225 19:04:11.397662  310133 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22301-5579/.minikube
	I1225 19:04:11.399180  310133 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1225 19:04:11.400803  310133 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1225 19:04:11.403098  310133 config.go:182] Loaded profile config "default-k8s-diff-port-960022": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1225 19:04:11.403833  310133 driver.go:422] Setting default libvirt URI to qemu:///system
	I1225 19:04:11.429851  310133 docker.go:124] docker version: linux-29.1.3:Docker Engine - Community
	I1225 19:04:11.429947  310133 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1225 19:04:11.487527  310133 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:65 OomKillDisable:false NGoroutines:76 SystemTime:2025-12-25 19:04:11.476936748 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:dea7da592f5d1d2b7755e3a161be07f43fad8f75 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1225 19:04:11.487639  310133 docker.go:319] overlay module found
	I1225 19:04:11.490226  310133 out.go:179] * Using the docker driver based on existing profile
	I1225 19:04:11.491362  310133 start.go:309] selected driver: docker
	I1225 19:04:11.491378  310133 start.go:928] validating driver "docker" against &{Name:default-k8s-diff-port-960022 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:default-k8s-diff-port-960022 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8444 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1225 19:04:11.491474  310133 start.go:939] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1225 19:04:11.492179  310133 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1225 19:04:11.545943  310133 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:65 OomKillDisable:false NGoroutines:76 SystemTime:2025-12-25 19:04:11.536358471 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:dea7da592f5d1d2b7755e3a161be07f43fad8f75 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1225 19:04:11.546224  310133 start_flags.go:1019] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1225 19:04:11.546256  310133 cni.go:84] Creating CNI manager for ""
	I1225 19:04:11.546303  310133 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1225 19:04:11.546336  310133 start.go:353] cluster config:
	{Name:default-k8s-diff-port-960022 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:default-k8s-diff-port-960022 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8444 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1225 19:04:11.548400  310133 out.go:179] * Starting "default-k8s-diff-port-960022" primary control-plane node in "default-k8s-diff-port-960022" cluster
	I1225 19:04:11.549704  310133 cache.go:134] Beginning downloading kic base image for docker with crio
	I1225 19:04:11.550989  310133 out.go:179] * Pulling base image v0.0.48-1766570851-22316 ...
	I1225 19:04:11.552134  310133 preload.go:188] Checking if preload exists for k8s version v1.34.3 and runtime crio
	I1225 19:04:11.552173  310133 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22301-5579/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.3-cri-o-overlay-amd64.tar.lz4
	I1225 19:04:11.552184  310133 cache.go:65] Caching tarball of preloaded images
	I1225 19:04:11.552256  310133 preload.go:251] Found /home/jenkins/minikube-integration/22301-5579/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1225 19:04:11.552257  310133 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a in local docker daemon
	I1225 19:04:11.552266  310133 cache.go:68] Finished verifying existence of preloaded tar for v1.34.3 on crio
	I1225 19:04:11.552424  310133 profile.go:143] Saving config to /home/jenkins/minikube-integration/22301-5579/.minikube/profiles/default-k8s-diff-port-960022/config.json ...
	I1225 19:04:11.575323  310133 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a in local docker daemon, skipping pull
	I1225 19:04:11.575353  310133 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a exists in daemon, skipping load
	I1225 19:04:11.575370  310133 cache.go:243] Successfully downloaded all kic artifacts
	I1225 19:04:11.575405  310133 start.go:360] acquireMachinesLock for default-k8s-diff-port-960022: {Name:mk439ca411b17a34361cdf557c6ddd774780f327 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1225 19:04:11.575480  310133 start.go:364] duration metric: took 40.957µs to acquireMachinesLock for "default-k8s-diff-port-960022"
	I1225 19:04:11.575501  310133 start.go:96] Skipping create...Using existing machine configuration
	I1225 19:04:11.575508  310133 fix.go:54] fixHost starting: 
	I1225 19:04:11.575810  310133 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-960022 --format={{.State.Status}}
	I1225 19:04:11.595262  310133 fix.go:112] recreateIfNeeded on default-k8s-diff-port-960022: state=Stopped err=<nil>
	W1225 19:04:11.595310  310133 fix.go:138] unexpected machine state, will restart: <nil>
	I1225 19:04:07.067071  308802 out.go:252] * Restarting existing docker container for "newest-cni-731832" ...
	I1225 19:04:07.067149  308802 cli_runner.go:164] Run: docker start newest-cni-731832
	I1225 19:04:07.313050  308802 cli_runner.go:164] Run: docker container inspect newest-cni-731832 --format={{.State.Status}}
	I1225 19:04:07.331810  308802 kic.go:430] container "newest-cni-731832" state is running.
	I1225 19:04:07.332186  308802 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-731832
	I1225 19:04:07.352635  308802 profile.go:143] Saving config to /home/jenkins/minikube-integration/22301-5579/.minikube/profiles/newest-cni-731832/config.json ...
	I1225 19:04:07.352835  308802 machine.go:94] provisionDockerMachine start ...
	I1225 19:04:07.352994  308802 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-731832
	I1225 19:04:07.372105  308802 main.go:144] libmachine: Using SSH client type: native
	I1225 19:04:07.372327  308802 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e300] 0x850fa0 <nil>  [] 0s} 127.0.0.1 33103 <nil> <nil>}
	I1225 19:04:07.372339  308802 main.go:144] libmachine: About to run SSH command:
	hostname
	I1225 19:04:07.373023  308802 main.go:144] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:45246->127.0.0.1:33103: read: connection reset by peer
	I1225 19:04:10.497866  308802 main.go:144] libmachine: SSH cmd err, output: <nil>: newest-cni-731832
	
	I1225 19:04:10.497918  308802 ubuntu.go:182] provisioning hostname "newest-cni-731832"
	I1225 19:04:10.497994  308802 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-731832
	I1225 19:04:10.515077  308802 main.go:144] libmachine: Using SSH client type: native
	I1225 19:04:10.515352  308802 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e300] 0x850fa0 <nil>  [] 0s} 127.0.0.1 33103 <nil> <nil>}
	I1225 19:04:10.515371  308802 main.go:144] libmachine: About to run SSH command:
	sudo hostname newest-cni-731832 && echo "newest-cni-731832" | sudo tee /etc/hostname
	I1225 19:04:10.649767  308802 main.go:144] libmachine: SSH cmd err, output: <nil>: newest-cni-731832
	
	I1225 19:04:10.649841  308802 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-731832
	I1225 19:04:10.669578  308802 main.go:144] libmachine: Using SSH client type: native
	I1225 19:04:10.669786  308802 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e300] 0x850fa0 <nil>  [] 0s} 127.0.0.1 33103 <nil> <nil>}
	I1225 19:04:10.669803  308802 main.go:144] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-731832' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-731832/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-731832' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1225 19:04:10.792176  308802 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	I1225 19:04:10.792208  308802 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22301-5579/.minikube CaCertPath:/home/jenkins/minikube-integration/22301-5579/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22301-5579/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22301-5579/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22301-5579/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22301-5579/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22301-5579/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22301-5579/.minikube}
	I1225 19:04:10.792253  308802 ubuntu.go:190] setting up certificates
	I1225 19:04:10.792265  308802 provision.go:84] configureAuth start
	I1225 19:04:10.792313  308802 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-731832
	I1225 19:04:10.811145  308802 provision.go:143] copyHostCerts
	I1225 19:04:10.811234  308802 exec_runner.go:144] found /home/jenkins/minikube-integration/22301-5579/.minikube/ca.pem, removing ...
	I1225 19:04:10.811251  308802 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22301-5579/.minikube/ca.pem
	I1225 19:04:10.811325  308802 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22301-5579/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22301-5579/.minikube/ca.pem (1078 bytes)
	I1225 19:04:10.811425  308802 exec_runner.go:144] found /home/jenkins/minikube-integration/22301-5579/.minikube/cert.pem, removing ...
	I1225 19:04:10.811433  308802 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22301-5579/.minikube/cert.pem
	I1225 19:04:10.811459  308802 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22301-5579/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22301-5579/.minikube/cert.pem (1123 bytes)
	I1225 19:04:10.811558  308802 exec_runner.go:144] found /home/jenkins/minikube-integration/22301-5579/.minikube/key.pem, removing ...
	I1225 19:04:10.811575  308802 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22301-5579/.minikube/key.pem
	I1225 19:04:10.811603  308802 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22301-5579/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22301-5579/.minikube/key.pem (1679 bytes)
	I1225 19:04:10.811678  308802 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22301-5579/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22301-5579/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22301-5579/.minikube/certs/ca-key.pem org=jenkins.newest-cni-731832 san=[127.0.0.1 192.168.85.2 localhost minikube newest-cni-731832]
	I1225 19:04:10.917517  308802 provision.go:177] copyRemoteCerts
	I1225 19:04:10.917579  308802 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1225 19:04:10.917618  308802 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-731832
	I1225 19:04:10.936069  308802 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33103 SSHKeyPath:/home/jenkins/minikube-integration/22301-5579/.minikube/machines/newest-cni-731832/id_rsa Username:docker}
	I1225 19:04:11.038600  308802 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22301-5579/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1225 19:04:11.055874  308802 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22301-5579/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1225 19:04:11.074058  308802 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22301-5579/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1225 19:04:11.093042  308802 provision.go:87] duration metric: took 300.759628ms to configureAuth
	I1225 19:04:11.093069  308802 ubuntu.go:206] setting minikube options for container-runtime
	I1225 19:04:11.093236  308802 config.go:182] Loaded profile config "newest-cni-731832": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-rc.1
	I1225 19:04:11.093327  308802 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-731832
	I1225 19:04:11.111873  308802 main.go:144] libmachine: Using SSH client type: native
	I1225 19:04:11.112095  308802 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e300] 0x850fa0 <nil>  [] 0s} 127.0.0.1 33103 <nil> <nil>}
	I1225 19:04:11.112113  308802 main.go:144] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1225 19:04:11.418070  308802 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1225 19:04:11.418096  308802 machine.go:97] duration metric: took 4.065246722s to provisionDockerMachine
	I1225 19:04:11.418110  308802 start.go:293] postStartSetup for "newest-cni-731832" (driver="docker")
	I1225 19:04:11.418127  308802 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1225 19:04:11.418198  308802 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1225 19:04:11.418244  308802 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-731832
	I1225 19:04:11.438383  308802 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33103 SSHKeyPath:/home/jenkins/minikube-integration/22301-5579/.minikube/machines/newest-cni-731832/id_rsa Username:docker}
	I1225 19:04:11.536100  308802 ssh_runner.go:195] Run: cat /etc/os-release
	I1225 19:04:11.540031  308802 main.go:144] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1225 19:04:11.540060  308802 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1225 19:04:11.540073  308802 filesync.go:126] Scanning /home/jenkins/minikube-integration/22301-5579/.minikube/addons for local assets ...
	I1225 19:04:11.540131  308802 filesync.go:126] Scanning /home/jenkins/minikube-integration/22301-5579/.minikube/files for local assets ...
	I1225 19:04:11.540241  308802 filesync.go:149] local asset: /home/jenkins/minikube-integration/22301-5579/.minikube/files/etc/ssl/certs/91122.pem -> 91122.pem in /etc/ssl/certs
	I1225 19:04:11.540367  308802 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1225 19:04:11.548967  308802 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22301-5579/.minikube/files/etc/ssl/certs/91122.pem --> /etc/ssl/certs/91122.pem (1708 bytes)
	I1225 19:04:11.567699  308802 start.go:296] duration metric: took 149.574945ms for postStartSetup
	I1225 19:04:11.567788  308802 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1225 19:04:11.567834  308802 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-731832
	I1225 19:04:11.587734  308802 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33103 SSHKeyPath:/home/jenkins/minikube-integration/22301-5579/.minikube/machines/newest-cni-731832/id_rsa Username:docker}
	I1225 19:04:11.676950  308802 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1225 19:04:11.682301  308802 fix.go:56] duration metric: took 4.636203469s for fixHost
	I1225 19:04:11.682330  308802 start.go:83] releasing machines lock for "newest-cni-731832", held for 4.636252625s
	I1225 19:04:11.682397  308802 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-731832
	I1225 19:04:11.702382  308802 ssh_runner.go:195] Run: cat /version.json
	I1225 19:04:11.702442  308802 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-731832
	I1225 19:04:11.702450  308802 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1225 19:04:11.702535  308802 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-731832
	I1225 19:04:11.728312  308802 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33103 SSHKeyPath:/home/jenkins/minikube-integration/22301-5579/.minikube/machines/newest-cni-731832/id_rsa Username:docker}
	I1225 19:04:11.728520  308802 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33103 SSHKeyPath:/home/jenkins/minikube-integration/22301-5579/.minikube/machines/newest-cni-731832/id_rsa Username:docker}
	I1225 19:04:11.821338  308802 ssh_runner.go:195] Run: systemctl --version
	I1225 19:04:12.269989  260034 api_server.go:315] stopped: https://192.168.94.2:8443/healthz: Get "https://192.168.94.2:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1225 19:04:12.270074  260034 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1225 19:04:12.270136  260034 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1225 19:04:12.298390  260034 cri.go:96] found id: "c7e7a74e4da2eb0a24231c0da6fca5c4d2624c5eff5fd59ea857dba6d0787036"
	I1225 19:04:12.298416  260034 cri.go:96] found id: "1d4a54214a1675f6c77eff77964da947e4f00cff45e007ad6adb0c898c3e76fa"
	I1225 19:04:12.298422  260034 cri.go:96] found id: "e3d5802d1e3d9285d4b0925e9f5391b90305352a65abbab3d6461e977e07719c"
	I1225 19:04:12.298427  260034 cri.go:96] found id: ""
	I1225 19:04:12.298436  260034 logs.go:282] 3 containers: [c7e7a74e4da2eb0a24231c0da6fca5c4d2624c5eff5fd59ea857dba6d0787036 1d4a54214a1675f6c77eff77964da947e4f00cff45e007ad6adb0c898c3e76fa e3d5802d1e3d9285d4b0925e9f5391b90305352a65abbab3d6461e977e07719c]
	I1225 19:04:12.298494  260034 ssh_runner.go:195] Run: which crictl
	I1225 19:04:12.302241  260034 ssh_runner.go:195] Run: which crictl
	I1225 19:04:12.305782  260034 ssh_runner.go:195] Run: which crictl
	I1225 19:04:12.309201  260034 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1225 19:04:12.309256  260034 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1225 19:04:12.338463  260034 cri.go:96] found id: "b9e20d208a63ece66f4b7d503d5220ba92810f69d6963dee30a8cf70eeec5508"
	I1225 19:04:12.338486  260034 cri.go:96] found id: ""
	I1225 19:04:12.338495  260034 logs.go:282] 1 containers: [b9e20d208a63ece66f4b7d503d5220ba92810f69d6963dee30a8cf70eeec5508]
	I1225 19:04:12.338558  260034 ssh_runner.go:195] Run: which crictl
	I1225 19:04:12.343086  260034 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1225 19:04:12.343161  260034 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1225 19:04:12.372712  260034 cri.go:96] found id: ""
	I1225 19:04:12.372740  260034 logs.go:282] 0 containers: []
	W1225 19:04:12.372752  260034 logs.go:284] No container was found matching "coredns"
	I1225 19:04:12.372760  260034 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1225 19:04:12.372810  260034 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1225 19:04:12.401198  260034 cri.go:96] found id: "5908cdc560e3221d929c96c779b2e73ece21f96acb63b3adbf448cc7de791f2b"
	I1225 19:04:12.401218  260034 cri.go:96] found id: "47a1daa4bde269c4add70fd957319c2bd011e78566a1187d9a76b78fb4005782"
	I1225 19:04:12.401223  260034 cri.go:96] found id: ""
	I1225 19:04:12.401230  260034 logs.go:282] 2 containers: [5908cdc560e3221d929c96c779b2e73ece21f96acb63b3adbf448cc7de791f2b 47a1daa4bde269c4add70fd957319c2bd011e78566a1187d9a76b78fb4005782]
	I1225 19:04:12.401285  260034 ssh_runner.go:195] Run: which crictl
	I1225 19:04:12.404882  260034 ssh_runner.go:195] Run: which crictl
	I1225 19:04:12.408479  260034 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1225 19:04:12.408547  260034 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1225 19:04:12.434675  260034 cri.go:96] found id: ""
	I1225 19:04:12.434705  260034 logs.go:282] 0 containers: []
	W1225 19:04:12.434716  260034 logs.go:284] No container was found matching "kube-proxy"
	I1225 19:04:12.434723  260034 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1225 19:04:12.434792  260034 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1225 19:04:12.462729  260034 cri.go:96] found id: "0887dc59ddb6090de10772f8a1f50ab0d2afe1586d7c8d118ffeef1810deb88d"
	I1225 19:04:12.462752  260034 cri.go:96] found id: "33343dda71f9e4a85025812aa89a625b9ed4aacd8cf40e537a040b7a352c6338"
	I1225 19:04:12.462758  260034 cri.go:96] found id: ""
	I1225 19:04:12.462767  260034 logs.go:282] 2 containers: [0887dc59ddb6090de10772f8a1f50ab0d2afe1586d7c8d118ffeef1810deb88d 33343dda71f9e4a85025812aa89a625b9ed4aacd8cf40e537a040b7a352c6338]
	I1225 19:04:12.462824  260034 ssh_runner.go:195] Run: which crictl
	I1225 19:04:12.466713  260034 ssh_runner.go:195] Run: which crictl
	I1225 19:04:12.470287  260034 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1225 19:04:12.470339  260034 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1225 19:04:12.497830  260034 cri.go:96] found id: ""
	I1225 19:04:12.497855  260034 logs.go:282] 0 containers: []
	W1225 19:04:12.497867  260034 logs.go:284] No container was found matching "kindnet"
	I1225 19:04:12.497875  260034 cri.go:61] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1225 19:04:12.498008  260034 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=storage-provisioner
	I1225 19:04:12.525112  260034 cri.go:96] found id: ""
	I1225 19:04:12.525136  260034 logs.go:282] 0 containers: []
	W1225 19:04:12.525147  260034 logs.go:284] No container was found matching "storage-provisioner"
	I1225 19:04:12.525158  260034 logs.go:123] Gathering logs for kube-apiserver [c7e7a74e4da2eb0a24231c0da6fca5c4d2624c5eff5fd59ea857dba6d0787036] ...
	I1225 19:04:12.525172  260034 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 c7e7a74e4da2eb0a24231c0da6fca5c4d2624c5eff5fd59ea857dba6d0787036"
	I1225 19:04:12.557871  260034 logs.go:123] Gathering logs for kube-apiserver [e3d5802d1e3d9285d4b0925e9f5391b90305352a65abbab3d6461e977e07719c] ...
	I1225 19:04:12.557917  260034 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e3d5802d1e3d9285d4b0925e9f5391b90305352a65abbab3d6461e977e07719c"
	I1225 19:04:11.882525  308802 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1225 19:04:11.919131  308802 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1225 19:04:11.923720  308802 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1225 19:04:11.923780  308802 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1225 19:04:11.932703  308802 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1225 19:04:11.932728  308802 start.go:496] detecting cgroup driver to use...
	I1225 19:04:11.932756  308802 detect.go:190] detected "systemd" cgroup driver on host os
	I1225 19:04:11.932819  308802 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1225 19:04:11.947054  308802 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1225 19:04:11.960187  308802 docker.go:218] disabling cri-docker service (if available) ...
	I1225 19:04:11.960255  308802 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1225 19:04:11.973971  308802 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1225 19:04:11.986465  308802 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1225 19:04:12.080359  308802 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1225 19:04:12.183989  308802 docker.go:234] disabling docker service ...
	I1225 19:04:12.184051  308802 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1225 19:04:12.198883  308802 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1225 19:04:12.211397  308802 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1225 19:04:12.288885  308802 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1225 19:04:12.381673  308802 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1225 19:04:12.395142  308802 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1225 19:04:12.410781  308802 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1225 19:04:12.410842  308802 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1225 19:04:12.419543  308802 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1225 19:04:12.419607  308802 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1225 19:04:12.428311  308802 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1225 19:04:12.437991  308802 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1225 19:04:12.447276  308802 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1225 19:04:12.455723  308802 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1225 19:04:12.466045  308802 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1225 19:04:12.474719  308802 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1225 19:04:12.483293  308802 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1225 19:04:12.491554  308802 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1225 19:04:12.500409  308802 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1225 19:04:12.588423  308802 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1225 19:04:12.723363  308802 start.go:553] Will wait 60s for socket path /var/run/crio/crio.sock
	I1225 19:04:12.723417  308802 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1225 19:04:12.727494  308802 start.go:574] Will wait 60s for crictl version
	I1225 19:04:12.727558  308802 ssh_runner.go:195] Run: which crictl
	I1225 19:04:12.731414  308802 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1225 19:04:12.759884  308802 start.go:590] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1225 19:04:12.759974  308802 ssh_runner.go:195] Run: crio --version
	I1225 19:04:12.789979  308802 ssh_runner.go:195] Run: crio --version
	I1225 19:04:12.821682  308802 out.go:179] * Preparing Kubernetes v1.35.0-rc.1 on CRI-O 1.34.3 ...
	I1225 19:04:12.822811  308802 cli_runner.go:164] Run: docker network inspect newest-cni-731832 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1225 19:04:12.840743  308802 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1225 19:04:12.845472  308802 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1225 19:04:12.858268  308802 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I1225 19:04:12.860558  308802 kubeadm.go:884] updating cluster {Name:newest-cni-731832 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:newest-cni-731832 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false} ...
	I1225 19:04:12.860686  308802 preload.go:188] Checking if preload exists for k8s version v1.35.0-rc.1 and runtime crio
	I1225 19:04:12.860737  308802 ssh_runner.go:195] Run: sudo crictl images --output json
	I1225 19:04:12.894300  308802 crio.go:561] all images are preloaded for cri-o runtime.
	I1225 19:04:12.894327  308802 crio.go:433] Images already preloaded, skipping extraction
	I1225 19:04:12.894393  308802 ssh_runner.go:195] Run: sudo crictl images --output json
	I1225 19:04:12.922290  308802 crio.go:561] all images are preloaded for cri-o runtime.
	I1225 19:04:12.922310  308802 cache_images.go:86] Images are preloaded, skipping loading
	I1225 19:04:12.922317  308802 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.35.0-rc.1 crio true true} ...
	I1225 19:04:12.922411  308802 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-rc.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=newest-cni-731832 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-rc.1 ClusterName:newest-cni-731832 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1225 19:04:12.922487  308802 ssh_runner.go:195] Run: crio config
	I1225 19:04:12.973709  308802 cni.go:84] Creating CNI manager for ""
	I1225 19:04:12.973743  308802 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1225 19:04:12.973761  308802 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I1225 19:04:12.973796  308802 kubeadm.go:197] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.35.0-rc.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-731832 NodeName:newest-cni-731832 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/
etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1225 19:04:12.974017  308802 kubeadm.go:203] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "newest-cni-731832"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-rc.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1225 19:04:12.974113  308802 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-rc.1
	I1225 19:04:12.983468  308802 binaries.go:51] Found k8s binaries, skipping transfer
	I1225 19:04:12.983542  308802 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1225 19:04:12.992769  308802 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (372 bytes)
	I1225 19:04:13.006182  308802 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (357 bytes)
	I1225 19:04:13.019135  308802 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2216 bytes)
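Editor's note: the kubeadm/kubelet/kube-proxy YAML dumped above is rendered from the options logged at 19:04:12.973 and copied to /var/tmp/minikube/kubeadm.yaml.new so it can be diffed against the existing file further down. The sketch below shows how such a fragment could be rendered with Go's text/template; the template and parameter struct are illustrative only, not minikube's actual template.

package main

import (
	"os"
	"text/template"
)

// Illustrative subset of the kubeadm options seen in the log above.
type kubeadmParams struct {
	APIServerPort int
	PodSubnet     string
	ServiceCIDR   string
	K8sVersion    string
}

const clusterConfigTmpl = `apiVersion: kubeadm.k8s.io/v1beta4
kind: ClusterConfiguration
controlPlaneEndpoint: control-plane.minikube.internal:{{.APIServerPort}}
kubernetesVersion: {{.K8sVersion}}
networking:
  podSubnet: "{{.PodSubnet}}"
  serviceSubnet: {{.ServiceCIDR}}
`

func main() {
	p := kubeadmParams{
		APIServerPort: 8443,
		PodSubnet:     "10.42.0.0/16",
		ServiceCIDR:   "10.96.0.0/12",
		K8sVersion:    "v1.35.0-rc.1",
	}
	t := template.Must(template.New("clusterconfig").Parse(clusterConfigTmpl))
	if err := t.Execute(os.Stdout, p); err != nil {
		panic(err)
	}
}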
	I1225 19:04:13.032740  308802 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1225 19:04:13.036471  308802 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1225 19:04:13.046514  308802 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1225 19:04:13.127733  308802 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1225 19:04:13.155464  308802 certs.go:69] Setting up /home/jenkins/minikube-integration/22301-5579/.minikube/profiles/newest-cni-731832 for IP: 192.168.85.2
	I1225 19:04:13.155488  308802 certs.go:195] generating shared ca certs ...
	I1225 19:04:13.155507  308802 certs.go:227] acquiring lock for ca certs: {Name:mkc96ab6366f062029d385d20297063671b19bca Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1225 19:04:13.155669  308802 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22301-5579/.minikube/ca.key
	I1225 19:04:13.155727  308802 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22301-5579/.minikube/proxy-client-ca.key
	I1225 19:04:13.155749  308802 certs.go:257] generating profile certs ...
	I1225 19:04:13.155855  308802 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22301-5579/.minikube/profiles/newest-cni-731832/client.key
	I1225 19:04:13.155944  308802 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22301-5579/.minikube/profiles/newest-cni-731832/apiserver.key.e5cae685
	I1225 19:04:13.156000  308802 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22301-5579/.minikube/profiles/newest-cni-731832/proxy-client.key
	I1225 19:04:13.156135  308802 certs.go:484] found cert: /home/jenkins/minikube-integration/22301-5579/.minikube/certs/9112.pem (1338 bytes)
	W1225 19:04:13.156174  308802 certs.go:480] ignoring /home/jenkins/minikube-integration/22301-5579/.minikube/certs/9112_empty.pem, impossibly tiny 0 bytes
	I1225 19:04:13.156194  308802 certs.go:484] found cert: /home/jenkins/minikube-integration/22301-5579/.minikube/certs/ca-key.pem (1679 bytes)
	I1225 19:04:13.156235  308802 certs.go:484] found cert: /home/jenkins/minikube-integration/22301-5579/.minikube/certs/ca.pem (1078 bytes)
	I1225 19:04:13.156267  308802 certs.go:484] found cert: /home/jenkins/minikube-integration/22301-5579/.minikube/certs/cert.pem (1123 bytes)
	I1225 19:04:13.156296  308802 certs.go:484] found cert: /home/jenkins/minikube-integration/22301-5579/.minikube/certs/key.pem (1679 bytes)
	I1225 19:04:13.156353  308802 certs.go:484] found cert: /home/jenkins/minikube-integration/22301-5579/.minikube/files/etc/ssl/certs/91122.pem (1708 bytes)
	I1225 19:04:13.157183  308802 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22301-5579/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1225 19:04:13.175521  308802 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22301-5579/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1225 19:04:13.195987  308802 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22301-5579/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1225 19:04:13.215627  308802 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22301-5579/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1225 19:04:13.239754  308802 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22301-5579/.minikube/profiles/newest-cni-731832/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1225 19:04:13.258932  308802 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22301-5579/.minikube/profiles/newest-cni-731832/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1225 19:04:13.275724  308802 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22301-5579/.minikube/profiles/newest-cni-731832/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1225 19:04:13.293394  308802 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22301-5579/.minikube/profiles/newest-cni-731832/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1225 19:04:13.310335  308802 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22301-5579/.minikube/files/etc/ssl/certs/91122.pem --> /usr/share/ca-certificates/91122.pem (1708 bytes)
	I1225 19:04:13.326933  308802 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22301-5579/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1225 19:04:13.344129  308802 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22301-5579/.minikube/certs/9112.pem --> /usr/share/ca-certificates/9112.pem (1338 bytes)
	I1225 19:04:13.362482  308802 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (722 bytes)
	I1225 19:04:13.375333  308802 ssh_runner.go:195] Run: openssl version
	I1225 19:04:13.381545  308802 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1225 19:04:13.389249  308802 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1225 19:04:13.396582  308802 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1225 19:04:13.400393  308802 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 25 18:28 /usr/share/ca-certificates/minikubeCA.pem
	I1225 19:04:13.400455  308802 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1225 19:04:13.434584  308802 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1225 19:04:13.442360  308802 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/9112.pem
	I1225 19:04:13.449776  308802 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/9112.pem /etc/ssl/certs/9112.pem
	I1225 19:04:13.457767  308802 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/9112.pem
	I1225 19:04:13.461682  308802 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 25 18:34 /usr/share/ca-certificates/9112.pem
	I1225 19:04:13.461741  308802 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/9112.pem
	I1225 19:04:13.496673  308802 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1225 19:04:13.504785  308802 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/91122.pem
	I1225 19:04:13.512223  308802 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/91122.pem /etc/ssl/certs/91122.pem
	I1225 19:04:13.519632  308802 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/91122.pem
	I1225 19:04:13.523420  308802 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 25 18:34 /usr/share/ca-certificates/91122.pem
	I1225 19:04:13.523472  308802 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/91122.pem
	I1225 19:04:13.558134  308802 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
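Editor's note: the repeated ln -fs / openssl x509 -hash / test -L sequence above installs minikubeCA.pem, 9112.pem and 91122.pem into the node's trust store: each certificate is linked into /etc/ssl/certs both under its own name and under its OpenSSL subject-name hash (e.g. b5213941.0), which is the layout OpenSSL uses to look up CAs. A hedged Go sketch of one iteration follows; it shells out to the openssl binary for the hash and assumes openssl is installed and /etc/ssl/certs is writable.

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// trustCert links certPath into /etc/ssl/certs as <subject-hash>.0.
func trustCert(certPath string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return err
	}
	hash := strings.TrimSpace(string(out))
	link := filepath.Join("/etc/ssl/certs", hash+".0")
	_ = os.Remove(link) // equivalent of ln -fs: replace any stale link
	return os.Symlink(certPath, link)
}

func main() {
	if err := trustCert("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}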
	I1225 19:04:13.566036  308802 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1225 19:04:13.569812  308802 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1225 19:04:13.605568  308802 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1225 19:04:13.640439  308802 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1225 19:04:13.681298  308802 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1225 19:04:13.723331  308802 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1225 19:04:13.765716  308802 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
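Editor's note: each "openssl x509 -noout -in ... -checkend 86400" call above asks whether the certificate will still be valid in 24 hours, presumably so stale control-plane certificates get regenerated before the cluster restart. The same check expressed in plain Go (a sketch, not minikube's code):

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the PEM certificate at path expires within d,
// the Go equivalent of `openssl x509 -checkend <seconds>`.
func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM data in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}
	fmt.Println("expires within 24h:", soon)
}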
	I1225 19:04:13.825970  308802 kubeadm.go:401] StartCluster: {Name:newest-cni-731832 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:newest-cni-731832 Namespace:default APIServerHAVIP: APIServerName:minikubeCA A
PIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: Moun
tMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1225 19:04:13.826084  308802 cri.go:61] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1225 19:04:13.826163  308802 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1225 19:04:13.857734  308802 cri.go:96] found id: "e02cd2fcac3d735d321c341c2fba7aabc974e0d4826fa67f14fd79754e0c64c4"
	I1225 19:04:13.857756  308802 cri.go:96] found id: "75fd7f6e481e82625456301d656dce65b6f0292112145825cd68747d96e652ac"
	I1225 19:04:13.857762  308802 cri.go:96] found id: "f7d1c87d0020257be0bb0226c540e4432cc1529072a6a6a02e9610ce7d2a72ad"
	I1225 19:04:13.857766  308802 cri.go:96] found id: "7cd3b0eb1fd2e4969002541b2f4ae25ee7229906d8fe3533bb4ab750efb6b446"
	I1225 19:04:13.857771  308802 cri.go:96] found id: ""
	I1225 19:04:13.857820  308802 ssh_runner.go:195] Run: sudo runc list -f json
	W1225 19:04:13.870401  308802 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-25T19:04:13Z" level=error msg="open /run/runc: no such file or directory"
	I1225 19:04:13.870466  308802 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1225 19:04:13.878435  308802 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1225 19:04:13.878455  308802 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1225 19:04:13.878504  308802 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1225 19:04:13.886315  308802 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1225 19:04:13.887135  308802 kubeconfig.go:47] verify endpoint returned: get endpoint: "newest-cni-731832" does not appear in /home/jenkins/minikube-integration/22301-5579/kubeconfig
	I1225 19:04:13.887609  308802 kubeconfig.go:62] /home/jenkins/minikube-integration/22301-5579/kubeconfig needs updating (will repair): [kubeconfig missing "newest-cni-731832" cluster setting kubeconfig missing "newest-cni-731832" context setting]
	I1225 19:04:13.888350  308802 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22301-5579/kubeconfig: {Name:mk959de02482281f87c2171d9b2421941fad1e06 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1225 19:04:13.890085  308802 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1225 19:04:13.898296  308802 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.85.2
	I1225 19:04:13.898327  308802 kubeadm.go:602] duration metric: took 19.865231ms to restartPrimaryControlPlane
	I1225 19:04:13.898337  308802 kubeadm.go:403] duration metric: took 72.376848ms to StartCluster
	I1225 19:04:13.898353  308802 settings.go:142] acquiring lock: {Name:mk8db67a95daebdad9164c803819dcb179c3006a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1225 19:04:13.898416  308802 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22301-5579/kubeconfig
	I1225 19:04:13.899679  308802 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22301-5579/kubeconfig: {Name:mk959de02482281f87c2171d9b2421941fad1e06 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1225 19:04:13.899939  308802 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1225 19:04:13.900042  308802 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1225 19:04:13.900126  308802 config.go:182] Loaded profile config "newest-cni-731832": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-rc.1
	I1225 19:04:13.900144  308802 addons.go:70] Setting storage-provisioner=true in profile "newest-cni-731832"
	I1225 19:04:13.900175  308802 addons.go:70] Setting dashboard=true in profile "newest-cni-731832"
	I1225 19:04:13.900196  308802 addons.go:239] Setting addon dashboard=true in "newest-cni-731832"
	W1225 19:04:13.900205  308802 addons.go:248] addon dashboard should already be in state true
	I1225 19:04:13.900179  308802 addons.go:239] Setting addon storage-provisioner=true in "newest-cni-731832"
	I1225 19:04:13.900229  308802 host.go:66] Checking if "newest-cni-731832" exists ...
	W1225 19:04:13.900237  308802 addons.go:248] addon storage-provisioner should already be in state true
	I1225 19:04:13.900205  308802 addons.go:70] Setting default-storageclass=true in profile "newest-cni-731832"
	I1225 19:04:13.900269  308802 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-731832"
	I1225 19:04:13.900275  308802 host.go:66] Checking if "newest-cni-731832" exists ...
	I1225 19:04:13.900579  308802 cli_runner.go:164] Run: docker container inspect newest-cni-731832 --format={{.State.Status}}
	I1225 19:04:13.900696  308802 cli_runner.go:164] Run: docker container inspect newest-cni-731832 --format={{.State.Status}}
	I1225 19:04:13.900736  308802 cli_runner.go:164] Run: docker container inspect newest-cni-731832 --format={{.State.Status}}
	I1225 19:04:13.902944  308802 out.go:179] * Verifying Kubernetes components...
	I1225 19:04:13.904040  308802 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1225 19:04:13.925979  308802 addons.go:239] Setting addon default-storageclass=true in "newest-cni-731832"
	I1225 19:04:13.925989  308802 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	W1225 19:04:13.926001  308802 addons.go:248] addon default-storageclass should already be in state true
	I1225 19:04:13.925991  308802 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1225 19:04:13.926030  308802 host.go:66] Checking if "newest-cni-731832" exists ...
	I1225 19:04:13.926636  308802 cli_runner.go:164] Run: docker container inspect newest-cni-731832 --format={{.State.Status}}
	I1225 19:04:13.927603  308802 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1225 19:04:13.927733  308802 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1225 19:04:13.927780  308802 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-731832
	I1225 19:04:13.928838  308802 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1225 19:04:09.284018  301873 kapi.go:214] "coredns" deployment in "kube-system" namespace and "auto-910464" context rescaled to 1 replicas
	W1225 19:04:10.993378  301873 node_ready.go:57] node "auto-910464" has "Ready":"False" status (will retry)
	W1225 19:04:13.493600  301873 node_ready.go:57] node "auto-910464" has "Ready":"False" status (will retry)
	I1225 19:04:13.929925  308802 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1225 19:04:13.929947  308802 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1225 19:04:13.930005  308802 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-731832
	I1225 19:04:13.955992  308802 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1225 19:04:13.956023  308802 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1225 19:04:13.956090  308802 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-731832
	I1225 19:04:13.963216  308802 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33103 SSHKeyPath:/home/jenkins/minikube-integration/22301-5579/.minikube/machines/newest-cni-731832/id_rsa Username:docker}
	I1225 19:04:13.964535  308802 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33103 SSHKeyPath:/home/jenkins/minikube-integration/22301-5579/.minikube/machines/newest-cni-731832/id_rsa Username:docker}
	I1225 19:04:13.982963  308802 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33103 SSHKeyPath:/home/jenkins/minikube-integration/22301-5579/.minikube/machines/newest-cni-731832/id_rsa Username:docker}
	I1225 19:04:14.048029  308802 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1225 19:04:14.062093  308802 api_server.go:52] waiting for apiserver process to appear ...
	I1225 19:04:14.062161  308802 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1225 19:04:14.072280  308802 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1225 19:04:14.072301  308802 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1225 19:04:14.073713  308802 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1225 19:04:14.076233  308802 api_server.go:72] duration metric: took 176.260531ms to wait for apiserver process to appear ...
	I1225 19:04:14.076257  308802 api_server.go:88] waiting for apiserver healthz status ...
	I1225 19:04:14.076276  308802 api_server.go:299] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1225 19:04:14.086693  308802 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1225 19:04:14.086718  308802 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1225 19:04:14.092278  308802 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1225 19:04:14.102279  308802 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1225 19:04:14.102300  308802 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1225 19:04:14.119336  308802 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1225 19:04:14.119360  308802 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1225 19:04:14.134807  308802 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1225 19:04:14.134841  308802 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1225 19:04:14.148048  308802 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1225 19:04:14.148074  308802 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1225 19:04:14.161183  308802 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1225 19:04:14.161208  308802 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1225 19:04:14.173284  308802 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1225 19:04:14.173308  308802 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1225 19:04:14.185440  308802 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1225 19:04:14.185459  308802 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1225 19:04:14.197581  308802 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1225 19:04:15.623351  308802 api_server.go:325] https://192.168.85.2:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1225 19:04:15.623386  308802 api_server.go:103] status: https://192.168.85.2:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1225 19:04:15.623403  308802 api_server.go:299] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1225 19:04:15.631321  308802 api_server.go:325] https://192.168.85.2:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1225 19:04:15.631349  308802 api_server.go:103] status: https://192.168.85.2:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1225 19:04:16.076885  308802 api_server.go:299] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1225 19:04:16.081667  308802 api_server.go:325] https://192.168.85.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1225 19:04:16.081696  308802 api_server.go:103] status: https://192.168.85.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1225 19:04:16.215958  308802 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.142210457s)
	I1225 19:04:16.216060  308802 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.123751881s)
	I1225 19:04:16.216180  308802 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (2.018561095s)
	I1225 19:04:16.219528  308802 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p newest-cni-731832 addons enable metrics-server
	
	I1225 19:04:16.227649  308802 out.go:179] * Enabled addons: storage-provisioner, dashboard, default-storageclass
	I1225 19:04:11.597312  310133 out.go:252] * Restarting existing docker container for "default-k8s-diff-port-960022" ...
	I1225 19:04:11.597387  310133 cli_runner.go:164] Run: docker start default-k8s-diff-port-960022
	I1225 19:04:11.857671  310133 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-960022 --format={{.State.Status}}
	I1225 19:04:11.877172  310133 kic.go:430] container "default-k8s-diff-port-960022" state is running.
	I1225 19:04:11.877665  310133 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-960022
	I1225 19:04:11.897052  310133 profile.go:143] Saving config to /home/jenkins/minikube-integration/22301-5579/.minikube/profiles/default-k8s-diff-port-960022/config.json ...
	I1225 19:04:11.897365  310133 machine.go:94] provisionDockerMachine start ...
	I1225 19:04:11.897455  310133 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-960022
	I1225 19:04:11.916569  310133 main.go:144] libmachine: Using SSH client type: native
	I1225 19:04:11.916937  310133 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e300] 0x850fa0 <nil>  [] 0s} 127.0.0.1 33108 <nil> <nil>}
	I1225 19:04:11.916957  310133 main.go:144] libmachine: About to run SSH command:
	hostname
	I1225 19:04:11.917573  310133 main.go:144] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:35982->127.0.0.1:33108: read: connection reset by peer
	I1225 19:04:15.049669  310133 main.go:144] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-960022
	
	I1225 19:04:15.049702  310133 ubuntu.go:182] provisioning hostname "default-k8s-diff-port-960022"
	I1225 19:04:15.049766  310133 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-960022
	I1225 19:04:15.070050  310133 main.go:144] libmachine: Using SSH client type: native
	I1225 19:04:15.070335  310133 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e300] 0x850fa0 <nil>  [] 0s} 127.0.0.1 33108 <nil> <nil>}
	I1225 19:04:15.070352  310133 main.go:144] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-960022 && echo "default-k8s-diff-port-960022" | sudo tee /etc/hostname
	I1225 19:04:15.210061  310133 main.go:144] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-960022
	
	I1225 19:04:15.210143  310133 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-960022
	I1225 19:04:15.229048  310133 main.go:144] libmachine: Using SSH client type: native
	I1225 19:04:15.229352  310133 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e300] 0x850fa0 <nil>  [] 0s} 127.0.0.1 33108 <nil> <nil>}
	I1225 19:04:15.229380  310133 main.go:144] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-960022' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-960022/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-960022' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1225 19:04:15.363887  310133 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	I1225 19:04:15.363946  310133 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22301-5579/.minikube CaCertPath:/home/jenkins/minikube-integration/22301-5579/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22301-5579/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22301-5579/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22301-5579/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22301-5579/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22301-5579/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22301-5579/.minikube}
	I1225 19:04:15.363966  310133 ubuntu.go:190] setting up certificates
	I1225 19:04:15.363976  310133 provision.go:84] configureAuth start
	I1225 19:04:15.364023  310133 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-960022
	I1225 19:04:15.383280  310133 provision.go:143] copyHostCerts
	I1225 19:04:15.383371  310133 exec_runner.go:144] found /home/jenkins/minikube-integration/22301-5579/.minikube/ca.pem, removing ...
	I1225 19:04:15.383392  310133 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22301-5579/.minikube/ca.pem
	I1225 19:04:15.383482  310133 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22301-5579/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22301-5579/.minikube/ca.pem (1078 bytes)
	I1225 19:04:15.383620  310133 exec_runner.go:144] found /home/jenkins/minikube-integration/22301-5579/.minikube/cert.pem, removing ...
	I1225 19:04:15.383634  310133 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22301-5579/.minikube/cert.pem
	I1225 19:04:15.383674  310133 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22301-5579/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22301-5579/.minikube/cert.pem (1123 bytes)
	I1225 19:04:15.383771  310133 exec_runner.go:144] found /home/jenkins/minikube-integration/22301-5579/.minikube/key.pem, removing ...
	I1225 19:04:15.383781  310133 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22301-5579/.minikube/key.pem
	I1225 19:04:15.383825  310133 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22301-5579/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22301-5579/.minikube/key.pem (1679 bytes)
	I1225 19:04:15.383941  310133 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22301-5579/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22301-5579/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22301-5579/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-960022 san=[127.0.0.1 192.168.103.2 default-k8s-diff-port-960022 localhost minikube]
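Editor's note: the provisioning line above generates machines/server.pem signed by the minikube CA, with the SAN list [127.0.0.1 192.168.103.2 default-k8s-diff-port-960022 localhost minikube]. The sketch below shows how a CA-signed server certificate with those SANs can be produced with Go's crypto/x509; unlike provision.go, which reuses ~/.minikube/certs/ca.pem and ca-key.pem, this sketch generates a throwaway CA so it stays self-contained.

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func check(err error) {
	if err != nil {
		panic(err)
	}
}

func main() {
	// Throwaway CA (the real flow reuses the existing minikube CA).
	caKey, err := rsa.GenerateKey(rand.Reader, 2048)
	check(err)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "exampleCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().AddDate(3, 0, 0),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, err := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	check(err)
	caCert, err := x509.ParseCertificate(caDER)
	check(err)

	// Server certificate with the SANs from the provisioning log line above.
	srvKey, err := rsa.GenerateKey(rand.Reader, 2048)
	check(err)
	srvTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{Organization: []string{"jenkins.default-k8s-diff-port-960022"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().AddDate(3, 0, 0),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		DNSNames:     []string{"default-k8s-diff-port-960022", "localhost", "minikube"},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.103.2")},
	}
	srvDER, err := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
	check(err)

	check(os.WriteFile("server.pem", pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: srvDER}), 0644))
	check(os.WriteFile("server-key.pem", pem.EncodeToMemory(&pem.Block{Type: "RSA PRIVATE KEY", Bytes: x509.MarshalPKCS1PrivateKey(srvKey)}), 0600))
}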
	I1225 19:04:15.504561  310133 provision.go:177] copyRemoteCerts
	I1225 19:04:15.504618  310133 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1225 19:04:15.504660  310133 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-960022
	I1225 19:04:15.527735  310133 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33108 SSHKeyPath:/home/jenkins/minikube-integration/22301-5579/.minikube/machines/default-k8s-diff-port-960022/id_rsa Username:docker}
	I1225 19:04:15.642310  310133 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22301-5579/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1225 19:04:15.685007  310133 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22301-5579/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I1225 19:04:15.716090  310133 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22301-5579/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1225 19:04:15.740078  310133 provision.go:87] duration metric: took 376.087982ms to configureAuth
	I1225 19:04:15.740114  310133 ubuntu.go:206] setting minikube options for container-runtime
	I1225 19:04:15.740325  310133 config.go:182] Loaded profile config "default-k8s-diff-port-960022": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1225 19:04:15.740453  310133 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-960022
	I1225 19:04:15.761999  310133 main.go:144] libmachine: Using SSH client type: native
	I1225 19:04:15.762207  310133 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e300] 0x850fa0 <nil>  [] 0s} 127.0.0.1 33108 <nil> <nil>}
	I1225 19:04:15.762228  310133 main.go:144] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1225 19:04:16.118477  310133 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1225 19:04:16.118506  310133 machine.go:97] duration metric: took 4.221121275s to provisionDockerMachine
	I1225 19:04:16.118520  310133 start.go:293] postStartSetup for "default-k8s-diff-port-960022" (driver="docker")
	I1225 19:04:16.118533  310133 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1225 19:04:16.118597  310133 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1225 19:04:16.118639  310133 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-960022
	I1225 19:04:16.139042  310133 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33108 SSHKeyPath:/home/jenkins/minikube-integration/22301-5579/.minikube/machines/default-k8s-diff-port-960022/id_rsa Username:docker}
	I1225 19:04:16.233678  310133 ssh_runner.go:195] Run: cat /etc/os-release
	I1225 19:04:16.237378  310133 main.go:144] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1225 19:04:16.237401  310133 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1225 19:04:16.237410  310133 filesync.go:126] Scanning /home/jenkins/minikube-integration/22301-5579/.minikube/addons for local assets ...
	I1225 19:04:16.237460  310133 filesync.go:126] Scanning /home/jenkins/minikube-integration/22301-5579/.minikube/files for local assets ...
	I1225 19:04:16.237537  310133 filesync.go:149] local asset: /home/jenkins/minikube-integration/22301-5579/.minikube/files/etc/ssl/certs/91122.pem -> 91122.pem in /etc/ssl/certs
	I1225 19:04:16.237630  310133 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1225 19:04:16.246190  310133 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22301-5579/.minikube/files/etc/ssl/certs/91122.pem --> /etc/ssl/certs/91122.pem (1708 bytes)
	I1225 19:04:16.266938  310133 start.go:296] duration metric: took 148.402747ms for postStartSetup
	I1225 19:04:16.267041  310133 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1225 19:04:16.267087  310133 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-960022
	I1225 19:04:16.286860  310133 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33108 SSHKeyPath:/home/jenkins/minikube-integration/22301-5579/.minikube/machines/default-k8s-diff-port-960022/id_rsa Username:docker}
	I1225 19:04:16.376250  310133 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1225 19:04:16.380615  310133 fix.go:56] duration metric: took 4.805100942s for fixHost
	I1225 19:04:16.380644  310133 start.go:83] releasing machines lock for "default-k8s-diff-port-960022", held for 4.80515252s
	I1225 19:04:16.380707  310133 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-960022
	I1225 19:04:16.230995  308802 addons.go:530] duration metric: took 2.330959709s for enable addons: enabled=[storage-provisioner dashboard default-storageclass]
	I1225 19:04:16.576872  308802 api_server.go:299] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1225 19:04:16.582258  308802 api_server.go:325] https://192.168.85.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1225 19:04:16.582286  308802 api_server.go:103] status: https://192.168.85.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1225 19:04:17.077064  308802 api_server.go:299] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1225 19:04:17.081546  308802 api_server.go:325] https://192.168.85.2:8443/healthz returned 200:
	ok
	I1225 19:04:17.082588  308802 api_server.go:141] control plane version: v1.35.0-rc.1
	I1225 19:04:17.082616  308802 api_server.go:131] duration metric: took 3.006351181s to wait for apiserver health ...
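Editor's note: the healthz progression above is the normal sequence for an apiserver coming back up: 403 while anonymous access to /healthz is still forbidden, 500 while the rbac/bootstrap-roles and scheduling post-start hooks are still completing, then 200 "ok". Minikube simply polls the endpoint until it answers. A minimal polling loop in Go is sketched below; it skips TLS verification purely to stay short, whereas a real client should trust the cluster CA.

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	// NOTE: InsecureSkipVerify only keeps the sketch short; trust the
	// cluster CA certificate in real code.
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(2 * time.Minute)
	for time.Now().Before(deadline) {
		resp, err := client.Get("https://192.168.85.2:8443/healthz")
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			fmt.Printf("healthz %d: %s\n", resp.StatusCode, body)
			if resp.StatusCode == http.StatusOK {
				return // apiserver is healthy
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
	fmt.Println("timed out waiting for apiserver /healthz")
}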
	I1225 19:04:17.082627  308802 system_pods.go:43] waiting for kube-system pods to appear ...
	I1225 19:04:17.086311  308802 system_pods.go:59] 8 kube-system pods found
	I1225 19:04:17.086343  308802 system_pods.go:61] "coredns-7d764666f9-hsm6h" [650e5fe1-fc5a-4f59-86ae-9bee4f454a6c] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint(s). no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1225 19:04:17.086351  308802 system_pods.go:61] "etcd-newest-cni-731832" [5dd7d1d7-ba36-4070-b68a-e45da3f0a4e4] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1225 19:04:17.086362  308802 system_pods.go:61] "kindnet-l587m" [6a88d1e0-b81d-4b51-a2dd-283548deb416] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1225 19:04:17.086371  308802 system_pods.go:61] "kube-apiserver-newest-cni-731832" [ec1a8903-a48a-4dd4-a9c9-2b44931f0f54] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1225 19:04:17.086377  308802 system_pods.go:61] "kube-controller-manager-newest-cni-731832" [0f388c1f-3938-4912-8aa7-4cd5c107b62a] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1225 19:04:17.086386  308802 system_pods.go:61] "kube-proxy-gnqfh" [7a8b403f-215a-402e-80a0-8c070cdc4875] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1225 19:04:17.086393  308802 system_pods.go:61] "kube-scheduler-newest-cni-731832" [7fa22a28-98a7-4b81-8660-fa3e637a8d0a] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1225 19:04:17.086400  308802 system_pods.go:61] "storage-provisioner" [c0825e53-f743-4887-ab64-13e5553dca5f] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint(s). no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1225 19:04:17.086411  308802 system_pods.go:74] duration metric: took 3.772553ms to wait for pod list to return data ...
	I1225 19:04:17.086420  308802 default_sa.go:34] waiting for default service account to be created ...
	I1225 19:04:17.088719  308802 default_sa.go:45] found service account: "default"
	I1225 19:04:17.088737  308802 default_sa.go:55] duration metric: took 2.310133ms for default service account to be created ...
	I1225 19:04:17.088747  308802 kubeadm.go:587] duration metric: took 3.188778368s to wait for: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1225 19:04:17.088763  308802 node_conditions.go:102] verifying NodePressure condition ...
	I1225 19:04:17.090886  308802 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1225 19:04:17.090926  308802 node_conditions.go:123] node cpu capacity is 8
	I1225 19:04:17.090944  308802 node_conditions.go:105] duration metric: took 2.174956ms to run NodePressure ...
	I1225 19:04:17.090958  308802 start.go:242] waiting for startup goroutines ...
	I1225 19:04:17.090975  308802 start.go:247] waiting for cluster config update ...
	I1225 19:04:17.090994  308802 start.go:256] writing updated cluster config ...
	I1225 19:04:17.091241  308802 ssh_runner.go:195] Run: rm -f paused
	I1225 19:04:17.141619  308802 start.go:625] kubectl: 1.35.0, cluster: 1.35.0-rc.1 (minor skew: 0)
	I1225 19:04:17.144303  308802 out.go:179] * Done! kubectl is now configured to use "newest-cni-731832" cluster and "default" namespace by default
	I1225 19:04:16.398226  310133 ssh_runner.go:195] Run: cat /version.json
	I1225 19:04:16.398273  310133 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-960022
	I1225 19:04:16.398322  310133 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1225 19:04:16.398385  310133 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-960022
	I1225 19:04:16.416283  310133 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33108 SSHKeyPath:/home/jenkins/minikube-integration/22301-5579/.minikube/machines/default-k8s-diff-port-960022/id_rsa Username:docker}
	I1225 19:04:16.417655  310133 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33108 SSHKeyPath:/home/jenkins/minikube-integration/22301-5579/.minikube/machines/default-k8s-diff-port-960022/id_rsa Username:docker}
	I1225 19:04:16.504284  310133 ssh_runner.go:195] Run: systemctl --version
	I1225 19:04:16.564467  310133 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1225 19:04:16.605355  310133 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1225 19:04:16.610648  310133 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1225 19:04:16.610719  310133 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1225 19:04:16.620669  310133 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1225 19:04:16.620697  310133 start.go:496] detecting cgroup driver to use...
	I1225 19:04:16.620736  310133 detect.go:190] detected "systemd" cgroup driver on host os
	I1225 19:04:16.620799  310133 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1225 19:04:16.638659  310133 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1225 19:04:16.653060  310133 docker.go:218] disabling cri-docker service (if available) ...
	I1225 19:04:16.653133  310133 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1225 19:04:16.671670  310133 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1225 19:04:16.686735  310133 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1225 19:04:16.791798  310133 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1225 19:04:16.882077  310133 docker.go:234] disabling docker service ...
	I1225 19:04:16.882140  310133 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1225 19:04:16.896437  310133 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1225 19:04:16.909102  310133 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1225 19:04:16.996695  310133 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1225 19:04:17.082415  310133 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1225 19:04:17.096574  310133 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1225 19:04:17.114802  310133 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1225 19:04:17.114867  310133 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1225 19:04:17.123529  310133 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1225 19:04:17.123607  310133 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1225 19:04:17.132390  310133 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1225 19:04:17.141573  310133 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1225 19:04:17.150190  310133 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1225 19:04:17.157938  310133 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1225 19:04:17.169834  310133 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1225 19:04:17.179731  310133 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1225 19:04:17.188952  310133 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1225 19:04:17.197045  310133 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1225 19:04:17.205792  310133 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1225 19:04:17.300845  310133 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1225 19:04:17.433231  310133 start.go:553] Will wait 60s for socket path /var/run/crio/crio.sock
	I1225 19:04:17.433306  310133 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1225 19:04:17.438202  310133 start.go:574] Will wait 60s for crictl version
	I1225 19:04:17.438266  310133 ssh_runner.go:195] Run: which crictl
	I1225 19:04:17.442941  310133 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1225 19:04:17.468383  310133 start.go:590] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1225 19:04:17.468461  310133 ssh_runner.go:195] Run: crio --version
	I1225 19:04:17.498633  310133 ssh_runner.go:195] Run: crio --version
	I1225 19:04:17.530717  310133 out.go:179] * Preparing Kubernetes v1.34.3 on CRI-O 1.34.3 ...
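	The 310133 lines above reconfigure CRI-O purely through sed edits to its drop-in config followed by a restart. Boiled down to a standalone sketch (same file, keys and images as shown in the log; illustrative only, not an additional step the test performs):
	
	    # Point crictl at the CRI-O socket, set the pause image and cgroup driver,
	    # then restart CRI-O and confirm it answers over CRI.
	    printf 'runtime-endpoint: unix:///var/run/crio/crio.sock\n' | sudo tee /etc/crictl.yaml
	    sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf
	    sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf
	    sudo systemctl daemon-reload && sudo systemctl restart crio
	    sudo crictl version        # should report RuntimeName: cri-o
	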
	I1225 19:04:12.592669  260034 logs.go:123] Gathering logs for etcd [b9e20d208a63ece66f4b7d503d5220ba92810f69d6963dee30a8cf70eeec5508] ...
	I1225 19:04:12.592702  260034 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b9e20d208a63ece66f4b7d503d5220ba92810f69d6963dee30a8cf70eeec5508"
	I1225 19:04:12.625173  260034 logs.go:123] Gathering logs for kube-scheduler [5908cdc560e3221d929c96c779b2e73ece21f96acb63b3adbf448cc7de791f2b] ...
	I1225 19:04:12.625199  260034 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5908cdc560e3221d929c96c779b2e73ece21f96acb63b3adbf448cc7de791f2b"
	I1225 19:04:12.652502  260034 logs.go:123] Gathering logs for kube-controller-manager [0887dc59ddb6090de10772f8a1f50ab0d2afe1586d7c8d118ffeef1810deb88d] ...
	I1225 19:04:12.652526  260034 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 0887dc59ddb6090de10772f8a1f50ab0d2afe1586d7c8d118ffeef1810deb88d"
	I1225 19:04:12.679345  260034 logs.go:123] Gathering logs for CRI-O ...
	I1225 19:04:12.679391  260034 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1225 19:04:12.734993  260034 logs.go:123] Gathering logs for kubelet ...
	I1225 19:04:12.735025  260034 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1225 19:04:12.825049  260034 logs.go:123] Gathering logs for kube-apiserver [1d4a54214a1675f6c77eff77964da947e4f00cff45e007ad6adb0c898c3e76fa] ...
	I1225 19:04:12.825076  260034 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 1d4a54214a1675f6c77eff77964da947e4f00cff45e007ad6adb0c898c3e76fa"
	I1225 19:04:12.857511  260034 logs.go:123] Gathering logs for kube-scheduler [47a1daa4bde269c4add70fd957319c2bd011e78566a1187d9a76b78fb4005782] ...
	I1225 19:04:12.857537  260034 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 47a1daa4bde269c4add70fd957319c2bd011e78566a1187d9a76b78fb4005782"
	I1225 19:04:12.888675  260034 logs.go:123] Gathering logs for kube-controller-manager [33343dda71f9e4a85025812aa89a625b9ed4aacd8cf40e537a040b7a352c6338] ...
	I1225 19:04:12.888701  260034 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 33343dda71f9e4a85025812aa89a625b9ed4aacd8cf40e537a040b7a352c6338"
	I1225 19:04:12.919091  260034 logs.go:123] Gathering logs for container status ...
	I1225 19:04:12.919135  260034 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1225 19:04:12.953107  260034 logs.go:123] Gathering logs for dmesg ...
	I1225 19:04:12.953137  260034 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1225 19:04:12.969636  260034 logs.go:123] Gathering logs for describe nodes ...
	I1225 19:04:12.969678  260034 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1225 19:04:17.531876  310133 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-960022 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1225 19:04:17.554280  310133 ssh_runner.go:195] Run: grep 192.168.103.1	host.minikube.internal$ /etc/hosts
	I1225 19:04:17.559039  310133 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.103.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1225 19:04:17.570497  310133 kubeadm.go:884] updating cluster {Name:default-k8s-diff-port-960022 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:default-k8s-diff-port-960022 Namespace:default APIServerHAVIP: APISer
verName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8444 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 Mount
Type:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false} ...
	I1225 19:04:17.570608  310133 preload.go:188] Checking if preload exists for k8s version v1.34.3 and runtime crio
	I1225 19:04:17.570651  310133 ssh_runner.go:195] Run: sudo crictl images --output json
	I1225 19:04:17.608005  310133 crio.go:561] all images are preloaded for cri-o runtime.
	I1225 19:04:17.608027  310133 crio.go:433] Images already preloaded, skipping extraction
	I1225 19:04:17.608070  310133 ssh_runner.go:195] Run: sudo crictl images --output json
	I1225 19:04:17.638724  310133 crio.go:561] all images are preloaded for cri-o runtime.
	I1225 19:04:17.638750  310133 cache_images.go:86] Images are preloaded, skipping loading
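	As a hedged aside, the preload check above is just "crictl images --output json"; if jq happened to be available on the node (it is not guaranteed in the kicbase image), the same output could be counted by hand:
	
	    sudo crictl images --output json | jq '.images | length'
	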
	I1225 19:04:17.638759  310133 kubeadm.go:935] updating node { 192.168.103.2 8444 v1.34.3 crio true true} ...
	I1225 19:04:17.638924  310133 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=default-k8s-diff-port-960022 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.103.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.3 ClusterName:default-k8s-diff-port-960022 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1225 19:04:17.639025  310133 ssh_runner.go:195] Run: crio config
	I1225 19:04:17.693222  310133 cni.go:84] Creating CNI manager for ""
	I1225 19:04:17.693250  310133 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1225 19:04:17.693267  310133 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1225 19:04:17.693298  310133 kubeadm.go:197] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.103.2 APIServerPort:8444 KubernetesVersion:v1.34.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-960022 NodeName:default-k8s-diff-port-960022 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.103.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.103.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/c
a.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1225 19:04:17.693419  310133 kubeadm.go:203] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.103.2
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-960022"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.103.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.103.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
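	The generated config above is shipped to /var/tmp/minikube/kubeadm.yaml.new a few lines below. As a rough, purely illustrative sketch of how kubeadm consumes such a file when bootstrapping from scratch (this particular run restarts an existing control plane instead of re-initializing):
	
	    sudo kubeadm init --config /var/tmp/minikube/kubeadm.yaml
	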
	
	I1225 19:04:17.693489  310133 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.3
	I1225 19:04:17.702443  310133 binaries.go:51] Found k8s binaries, skipping transfer
	I1225 19:04:17.702509  310133 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1225 19:04:17.711085  310133 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (379 bytes)
	I1225 19:04:17.724554  310133 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1225 19:04:17.739239  310133 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2227 bytes)
	I1225 19:04:17.752708  310133 ssh_runner.go:195] Run: grep 192.168.103.2	control-plane.minikube.internal$ /etc/hosts
	I1225 19:04:17.756382  310133 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.103.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1225 19:04:17.766012  310133 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1225 19:04:17.852884  310133 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1225 19:04:17.875868  310133 certs.go:69] Setting up /home/jenkins/minikube-integration/22301-5579/.minikube/profiles/default-k8s-diff-port-960022 for IP: 192.168.103.2
	I1225 19:04:17.875921  310133 certs.go:195] generating shared ca certs ...
	I1225 19:04:17.875959  310133 certs.go:227] acquiring lock for ca certs: {Name:mkc96ab6366f062029d385d20297063671b19bca Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1225 19:04:17.876141  310133 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22301-5579/.minikube/ca.key
	I1225 19:04:17.876213  310133 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22301-5579/.minikube/proxy-client-ca.key
	I1225 19:04:17.876233  310133 certs.go:257] generating profile certs ...
	I1225 19:04:17.876358  310133 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22301-5579/.minikube/profiles/default-k8s-diff-port-960022/client.key
	I1225 19:04:17.876738  310133 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22301-5579/.minikube/profiles/default-k8s-diff-port-960022/apiserver.key.a3ef6c0c
	I1225 19:04:17.876825  310133 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22301-5579/.minikube/profiles/default-k8s-diff-port-960022/proxy-client.key
	I1225 19:04:17.877000  310133 certs.go:484] found cert: /home/jenkins/minikube-integration/22301-5579/.minikube/certs/9112.pem (1338 bytes)
	W1225 19:04:17.877043  310133 certs.go:480] ignoring /home/jenkins/minikube-integration/22301-5579/.minikube/certs/9112_empty.pem, impossibly tiny 0 bytes
	I1225 19:04:17.877054  310133 certs.go:484] found cert: /home/jenkins/minikube-integration/22301-5579/.minikube/certs/ca-key.pem (1679 bytes)
	I1225 19:04:17.877090  310133 certs.go:484] found cert: /home/jenkins/minikube-integration/22301-5579/.minikube/certs/ca.pem (1078 bytes)
	I1225 19:04:17.877122  310133 certs.go:484] found cert: /home/jenkins/minikube-integration/22301-5579/.minikube/certs/cert.pem (1123 bytes)
	I1225 19:04:17.877157  310133 certs.go:484] found cert: /home/jenkins/minikube-integration/22301-5579/.minikube/certs/key.pem (1679 bytes)
	I1225 19:04:17.877212  310133 certs.go:484] found cert: /home/jenkins/minikube-integration/22301-5579/.minikube/files/etc/ssl/certs/91122.pem (1708 bytes)
	I1225 19:04:17.878542  310133 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22301-5579/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1225 19:04:17.902470  310133 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22301-5579/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1225 19:04:17.922753  310133 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22301-5579/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1225 19:04:17.944259  310133 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22301-5579/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1225 19:04:17.968484  310133 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22301-5579/.minikube/profiles/default-k8s-diff-port-960022/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1225 19:04:17.990349  310133 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22301-5579/.minikube/profiles/default-k8s-diff-port-960022/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1225 19:04:18.007777  310133 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22301-5579/.minikube/profiles/default-k8s-diff-port-960022/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1225 19:04:18.024328  310133 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22301-5579/.minikube/profiles/default-k8s-diff-port-960022/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1225 19:04:18.042555  310133 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22301-5579/.minikube/files/etc/ssl/certs/91122.pem --> /usr/share/ca-certificates/91122.pem (1708 bytes)
	I1225 19:04:18.060404  310133 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22301-5579/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1225 19:04:18.078655  310133 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22301-5579/.minikube/certs/9112.pem --> /usr/share/ca-certificates/9112.pem (1338 bytes)
	I1225 19:04:18.102349  310133 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (722 bytes)
	I1225 19:04:18.115413  310133 ssh_runner.go:195] Run: openssl version
	I1225 19:04:18.121430  310133 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/91122.pem
	I1225 19:04:18.129110  310133 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/91122.pem /etc/ssl/certs/91122.pem
	I1225 19:04:18.137184  310133 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/91122.pem
	I1225 19:04:18.140995  310133 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 25 18:34 /usr/share/ca-certificates/91122.pem
	I1225 19:04:18.141057  310133 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/91122.pem
	I1225 19:04:18.176707  310133 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1225 19:04:18.184835  310133 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1225 19:04:18.192735  310133 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1225 19:04:18.200610  310133 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1225 19:04:18.204731  310133 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 25 18:28 /usr/share/ca-certificates/minikubeCA.pem
	I1225 19:04:18.204783  310133 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1225 19:04:18.244400  310133 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1225 19:04:18.253481  310133 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/9112.pem
	I1225 19:04:18.261795  310133 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/9112.pem /etc/ssl/certs/9112.pem
	I1225 19:04:18.269481  310133 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/9112.pem
	I1225 19:04:18.273356  310133 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 25 18:34 /usr/share/ca-certificates/9112.pem
	I1225 19:04:18.273422  310133 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/9112.pem
	I1225 19:04:18.308838  310133 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1225 19:04:18.316595  310133 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1225 19:04:18.320334  310133 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1225 19:04:18.355507  310133 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1225 19:04:18.393013  310133 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1225 19:04:18.438330  310133 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1225 19:04:18.481636  310133 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1225 19:04:18.539097  310133 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
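	The six openssl runs above use "-checkend 86400", which exits 0 only if the certificate is still valid 24 hours from now; that exit code is all minikube inspects here. A minimal standalone version of the same check, reusing one of the paths from the log:
	
	    if openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400; then
	        echo "cert still valid for at least another day"
	    else
	        echo "cert expires within 24h (renewal needed)"
	    fi
	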
	I1225 19:04:18.599548  310133 kubeadm.go:401] StartCluster: {Name:default-k8s-diff-port-960022 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:default-k8s-diff-port-960022 Namespace:default APIServerHAVIP: APIServer
Name:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8444 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountTyp
e:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1225 19:04:18.599652  310133 cri.go:61] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1225 19:04:18.599724  310133 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1225 19:04:18.635382  310133 cri.go:96] found id: "deb534fd994d4a2ae1235cd069ddaa760e1a5e6170fbf9a1ea236267d7a7dbf3"
	I1225 19:04:18.635505  310133 cri.go:96] found id: "d7afd3e6efe6f106fd792404c924d54e7a199c5c88a6c82664ffa1c729eee3ee"
	I1225 19:04:18.635513  310133 cri.go:96] found id: "e331a83a17cd96725879adde3c8dabff77823d5c1af59510c5a9822f15b9601d"
	I1225 19:04:18.635526  310133 cri.go:96] found id: "354a51e629671e49dd48aa32ce81ed41d5eaf4761e538194e03358bc1fcc7c09"
	I1225 19:04:18.635531  310133 cri.go:96] found id: ""
	I1225 19:04:18.635599  310133 ssh_runner.go:195] Run: sudo runc list -f json
	W1225 19:04:18.650167  310133 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-25T19:04:18Z" level=error msg="open /run/runc: no such file or directory"
	I1225 19:04:18.650231  310133 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1225 19:04:18.659169  310133 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1225 19:04:18.659186  310133 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1225 19:04:18.659238  310133 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1225 19:04:18.667126  310133 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1225 19:04:18.668148  310133 kubeconfig.go:47] verify endpoint returned: get endpoint: "default-k8s-diff-port-960022" does not appear in /home/jenkins/minikube-integration/22301-5579/kubeconfig
	I1225 19:04:18.668771  310133 kubeconfig.go:62] /home/jenkins/minikube-integration/22301-5579/kubeconfig needs updating (will repair): [kubeconfig missing "default-k8s-diff-port-960022" cluster setting kubeconfig missing "default-k8s-diff-port-960022" context setting]
	I1225 19:04:18.669448  310133 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22301-5579/kubeconfig: {Name:mk959de02482281f87c2171d9b2421941fad1e06 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1225 19:04:18.671055  310133 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1225 19:04:18.685888  310133 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.103.2
	I1225 19:04:18.686116  310133 kubeadm.go:602] duration metric: took 26.923081ms to restartPrimaryControlPlane
	I1225 19:04:18.686135  310133 kubeadm.go:403] duration metric: took 86.591882ms to StartCluster
	I1225 19:04:18.686154  310133 settings.go:142] acquiring lock: {Name:mk8db67a95daebdad9164c803819dcb179c3006a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1225 19:04:18.686220  310133 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22301-5579/kubeconfig
	I1225 19:04:18.688138  310133 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22301-5579/kubeconfig: {Name:mk959de02482281f87c2171d9b2421941fad1e06 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1225 19:04:18.688490  310133 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.103.2 Port:8444 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1225 19:04:18.688743  310133 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1225 19:04:18.689005  310133 addons.go:70] Setting storage-provisioner=true in profile "default-k8s-diff-port-960022"
	I1225 19:04:18.689026  310133 addons.go:239] Setting addon storage-provisioner=true in "default-k8s-diff-port-960022"
	W1225 19:04:18.689034  310133 addons.go:248] addon storage-provisioner should already be in state true
	I1225 19:04:18.689060  310133 host.go:66] Checking if "default-k8s-diff-port-960022" exists ...
	I1225 19:04:18.689589  310133 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-960022 --format={{.State.Status}}
	I1225 19:04:18.689758  310133 addons.go:70] Setting dashboard=true in profile "default-k8s-diff-port-960022"
	I1225 19:04:18.689776  310133 addons.go:239] Setting addon dashboard=true in "default-k8s-diff-port-960022"
	W1225 19:04:18.689784  310133 addons.go:248] addon dashboard should already be in state true
	I1225 19:04:18.689809  310133 host.go:66] Checking if "default-k8s-diff-port-960022" exists ...
	I1225 19:04:18.690354  310133 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-960022 --format={{.State.Status}}
	I1225 19:04:18.688955  310133 config.go:182] Loaded profile config "default-k8s-diff-port-960022": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1225 19:04:18.690605  310133 addons.go:70] Setting default-storageclass=true in profile "default-k8s-diff-port-960022"
	I1225 19:04:18.690622  310133 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-960022"
	I1225 19:04:18.690928  310133 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-960022 --format={{.State.Status}}
	I1225 19:04:18.691378  310133 out.go:179] * Verifying Kubernetes components...
	I1225 19:04:18.692413  310133 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1225 19:04:18.720470  310133 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1225 19:04:18.721612  310133 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1225 19:04:18.721631  310133 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	W1225 19:04:15.493636  301873 node_ready.go:57] node "auto-910464" has "Ready":"False" status (will retry)
	W1225 19:04:17.993795  301873 node_ready.go:57] node "auto-910464" has "Ready":"False" status (will retry)
	I1225 19:04:18.722140  310133 addons.go:239] Setting addon default-storageclass=true in "default-k8s-diff-port-960022"
	W1225 19:04:18.722162  310133 addons.go:248] addon default-storageclass should already be in state true
	I1225 19:04:18.722189  310133 host.go:66] Checking if "default-k8s-diff-port-960022" exists ...
	I1225 19:04:18.722647  310133 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-960022 --format={{.State.Status}}
	I1225 19:04:18.722701  310133 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1225 19:04:18.722716  310133 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1225 19:04:18.722701  310133 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1225 19:04:18.722800  310133 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1225 19:04:18.722836  310133 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-960022
	I1225 19:04:18.722779  310133 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-960022
	I1225 19:04:18.760541  310133 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1225 19:04:18.760563  310133 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1225 19:04:18.760634  310133 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-960022
	I1225 19:04:18.766449  310133 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33108 SSHKeyPath:/home/jenkins/minikube-integration/22301-5579/.minikube/machines/default-k8s-diff-port-960022/id_rsa Username:docker}
	I1225 19:04:18.766517  310133 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33108 SSHKeyPath:/home/jenkins/minikube-integration/22301-5579/.minikube/machines/default-k8s-diff-port-960022/id_rsa Username:docker}
	I1225 19:04:18.792933  310133 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33108 SSHKeyPath:/home/jenkins/minikube-integration/22301-5579/.minikube/machines/default-k8s-diff-port-960022/id_rsa Username:docker}
	I1225 19:04:18.888618  310133 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1225 19:04:18.914439  310133 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1225 19:04:18.919460  310133 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-960022" to be "Ready" ...
	I1225 19:04:18.928507  310133 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1225 19:04:18.928655  310133 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1225 19:04:18.931860  310133 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1225 19:04:18.949412  310133 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1225 19:04:18.949446  310133 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1225 19:04:18.970793  310133 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1225 19:04:18.970821  310133 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1225 19:04:18.989987  310133 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1225 19:04:18.990009  310133 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1225 19:04:19.009275  310133 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1225 19:04:19.009314  310133 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1225 19:04:19.024188  310133 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1225 19:04:19.024212  310133 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1225 19:04:19.038818  310133 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1225 19:04:19.038844  310133 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1225 19:04:19.052425  310133 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1225 19:04:19.052449  310133 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1225 19:04:19.066054  310133 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1225 19:04:19.066084  310133 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1225 19:04:19.080719  310133 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1225 19:04:20.352977  310133 node_ready.go:49] node "default-k8s-diff-port-960022" is "Ready"
	I1225 19:04:20.353016  310133 node_ready.go:38] duration metric: took 1.433509167s for node "default-k8s-diff-port-960022" to be "Ready" ...
	I1225 19:04:20.353033  310133 api_server.go:52] waiting for apiserver process to appear ...
	I1225 19:04:20.353084  310133 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1225 19:04:20.942472  310133 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.027992905s)
	I1225 19:04:20.942562  310133 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.010668999s)
	I1225 19:04:20.942700  310133 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.861934096s)
	I1225 19:04:20.942739  310133 api_server.go:72] duration metric: took 2.254184864s to wait for apiserver process to appear ...
	I1225 19:04:20.942753  310133 api_server.go:88] waiting for apiserver healthz status ...
	I1225 19:04:20.942773  310133 api_server.go:299] Checking apiserver healthz at https://192.168.103.2:8444/healthz ...
	I1225 19:04:20.946523  310133 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p default-k8s-diff-port-960022 addons enable metrics-server
	
	I1225 19:04:20.947924  310133 api_server.go:325] https://192.168.103.2:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1225 19:04:20.947954  310133 api_server.go:103] status: https://192.168.103.2:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1225 19:04:20.950155  310133 out.go:179] * Enabled addons: storage-provisioner, dashboard, default-storageclass
	I1225 19:04:20.952593  310133 addons.go:530] duration metric: took 2.263854279s for enable addons: enabled=[storage-provisioner dashboard default-storageclass]
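	The 500 from /healthz above is typically a transient state right after an apiserver restart: per the output, only the rbac and scheduling bootstrap poststarthooks are still pending. The same probe can be reproduced against the endpoint from the log (illustrative only; -k is needed because the apiserver serves a cluster-internal CA):
	
	    curl -k "https://192.168.103.2:8444/healthz?verbose"
	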
	
	
	==> CRI-O <==
	Dec 25 19:04:16 newest-cni-731832 crio[524]: time="2025-12-25T19:04:16.531030472Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 25 19:04:16 newest-cni-731832 crio[524]: time="2025-12-25T19:04:16.533287922Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ip_unprivileged_port_start 0}: \"net.ipv4.ip_unprivileged_port_start\" not allowed with host net enabled" id=2ad17cf5-ac34-475d-abfc-03e4e3ee4bdc name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 25 19:04:16 newest-cni-731832 crio[524]: time="2025-12-25T19:04:16.533928593Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ip_unprivileged_port_start 0}: \"net.ipv4.ip_unprivileged_port_start\" not allowed with host net enabled" id=b21e68a9-fa6b-41e1-90b6-813a39a806f7 name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 25 19:04:16 newest-cni-731832 crio[524]: time="2025-12-25T19:04:16.534843428Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Dec 25 19:04:16 newest-cni-731832 crio[524]: time="2025-12-25T19:04:16.535318676Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Dec 25 19:04:16 newest-cni-731832 crio[524]: time="2025-12-25T19:04:16.535454258Z" level=info msg="Ran pod sandbox ed8cd62b93faa62fa90cffa91ea9826d4e395a8d77bdaa1617fe7961d6bdf824 with infra container: kube-system/kindnet-l587m/POD" id=2ad17cf5-ac34-475d-abfc-03e4e3ee4bdc name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 25 19:04:16 newest-cni-731832 crio[524]: time="2025-12-25T19:04:16.536135758Z" level=info msg="Ran pod sandbox 0349fb644bfb09dca1e06d207ac1671c600d5cf3494ce2cbf2891dc03db4a8f1 with infra container: kube-system/kube-proxy-gnqfh/POD" id=b21e68a9-fa6b-41e1-90b6-813a39a806f7 name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 25 19:04:16 newest-cni-731832 crio[524]: time="2025-12-25T19:04:16.536630522Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20251212-v0.29.0-alpha-105-g20ccfc88" id=1e6a5f43-1119-45ac-9536-2312ea7f39e2 name=/runtime.v1.ImageService/ImageStatus
	Dec 25 19:04:16 newest-cni-731832 crio[524]: time="2025-12-25T19:04:16.537167146Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.35.0-rc.1" id=bb28eb2c-15c3-46eb-9023-b5e4782a90f4 name=/runtime.v1.ImageService/ImageStatus
	Dec 25 19:04:16 newest-cni-731832 crio[524]: time="2025-12-25T19:04:16.537585808Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20251212-v0.29.0-alpha-105-g20ccfc88" id=d03161b5-7b37-4f4c-a87c-be93a606599d name=/runtime.v1.ImageService/ImageStatus
	Dec 25 19:04:16 newest-cni-731832 crio[524]: time="2025-12-25T19:04:16.5380723Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.35.0-rc.1" id=eaba4d41-3250-47aa-b735-58bac4e5d22c name=/runtime.v1.ImageService/ImageStatus
	Dec 25 19:04:16 newest-cni-731832 crio[524]: time="2025-12-25T19:04:16.538691774Z" level=info msg="Creating container: kube-system/kindnet-l587m/kindnet-cni" id=c2600e3c-04dd-4a76-86ed-2e545a2059c5 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 25 19:04:16 newest-cni-731832 crio[524]: time="2025-12-25T19:04:16.538790937Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 25 19:04:16 newest-cni-731832 crio[524]: time="2025-12-25T19:04:16.538958309Z" level=info msg="Creating container: kube-system/kube-proxy-gnqfh/kube-proxy" id=191a6cc7-0175-4467-8bd4-6f6e23e57f4a name=/runtime.v1.RuntimeService/CreateContainer
	Dec 25 19:04:16 newest-cni-731832 crio[524]: time="2025-12-25T19:04:16.539042512Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 25 19:04:16 newest-cni-731832 crio[524]: time="2025-12-25T19:04:16.544235771Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 25 19:04:16 newest-cni-731832 crio[524]: time="2025-12-25T19:04:16.544670961Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 25 19:04:16 newest-cni-731832 crio[524]: time="2025-12-25T19:04:16.544837147Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 25 19:04:16 newest-cni-731832 crio[524]: time="2025-12-25T19:04:16.545174535Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 25 19:04:16 newest-cni-731832 crio[524]: time="2025-12-25T19:04:16.568410086Z" level=info msg="Created container 30a747c2e4c477b43905a2ae570c93b6cc50fa6dc00fdd514232650211e0a2b6: kube-system/kindnet-l587m/kindnet-cni" id=c2600e3c-04dd-4a76-86ed-2e545a2059c5 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 25 19:04:16 newest-cni-731832 crio[524]: time="2025-12-25T19:04:16.568981221Z" level=info msg="Starting container: 30a747c2e4c477b43905a2ae570c93b6cc50fa6dc00fdd514232650211e0a2b6" id=0cb6bd3c-06d5-4578-b4a9-d530822e3f3c name=/runtime.v1.RuntimeService/StartContainer
	Dec 25 19:04:16 newest-cni-731832 crio[524]: time="2025-12-25T19:04:16.57069111Z" level=info msg="Started container" PID=1053 containerID=30a747c2e4c477b43905a2ae570c93b6cc50fa6dc00fdd514232650211e0a2b6 description=kube-system/kindnet-l587m/kindnet-cni id=0cb6bd3c-06d5-4578-b4a9-d530822e3f3c name=/runtime.v1.RuntimeService/StartContainer sandboxID=ed8cd62b93faa62fa90cffa91ea9826d4e395a8d77bdaa1617fe7961d6bdf824
	Dec 25 19:04:16 newest-cni-731832 crio[524]: time="2025-12-25T19:04:16.571748706Z" level=info msg="Created container 32ba98d006f6f3a3154c40ff151535abf5952d3effea067df2b776e9329f7596: kube-system/kube-proxy-gnqfh/kube-proxy" id=191a6cc7-0175-4467-8bd4-6f6e23e57f4a name=/runtime.v1.RuntimeService/CreateContainer
	Dec 25 19:04:16 newest-cni-731832 crio[524]: time="2025-12-25T19:04:16.572327485Z" level=info msg="Starting container: 32ba98d006f6f3a3154c40ff151535abf5952d3effea067df2b776e9329f7596" id=b2a3768a-3a8c-4847-b999-2d252d9586aa name=/runtime.v1.RuntimeService/StartContainer
	Dec 25 19:04:16 newest-cni-731832 crio[524]: time="2025-12-25T19:04:16.575135804Z" level=info msg="Started container" PID=1054 containerID=32ba98d006f6f3a3154c40ff151535abf5952d3effea067df2b776e9329f7596 description=kube-system/kube-proxy-gnqfh/kube-proxy id=b2a3768a-3a8c-4847-b999-2d252d9586aa name=/runtime.v1.RuntimeService/StartContainer sandboxID=0349fb644bfb09dca1e06d207ac1671c600d5cf3494ce2cbf2891dc03db4a8f1
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	32ba98d006f6f       af0321f3a4f388cfb978464739c323ebf891a7b0b50cdfd7179e92f141dad42a   6 seconds ago       Running             kube-proxy                1                   0349fb644bfb0       kube-proxy-gnqfh                            kube-system
	30a747c2e4c47       4921d7a6dffa922dd679732ba4797085c4f39e9a53bee8b6fdb1d463e8571251   6 seconds ago       Running             kindnet-cni               1                   ed8cd62b93faa       kindnet-l587m                               kube-system
	e02cd2fcac3d7       73f80cdc073daa4d501207f9e6dec1fa9eea5f27e8d347b8a0c4bad8811eecdc   9 seconds ago       Running             kube-scheduler            1                   27d73f44e000a       kube-scheduler-newest-cni-731832            kube-system
	75fd7f6e481e8       58865405a13bccac1d74bc3f446dddd22e6ef0d7ee8b52363c86dd31838976ce   9 seconds ago       Running             kube-apiserver            1                   c0e91b6faa2d1       kube-apiserver-newest-cni-731832            kube-system
	f7d1c87d00202       0a108f7189562e99793bdecab61fdf1a7c9d913af3385de9da17fb9d6ff430e2   9 seconds ago       Running             etcd                      1                   5ac0368980e19       etcd-newest-cni-731832                      kube-system
	7cd3b0eb1fd2e       5032a56602e1b9bd8856699701b6148aa1b9901d05b61f893df3b57f84aca614   9 seconds ago       Running             kube-controller-manager   1                   0ac3b1b4a9a8e       kube-controller-manager-newest-cni-731832   kube-system
	
	
	==> describe nodes <==
	Name:               newest-cni-731832
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=newest-cni-731832
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=65b0339f3ab6fa9cf527eb915d9288ef7a9c7fef
	                    minikube.k8s.io/name=newest-cni-731832
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_25T19_03_51_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 25 Dec 2025 19:03:45 +0000
	Taints:             node.kubernetes.io/not-ready:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  newest-cni-731832
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 25 Dec 2025 19:04:15 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 25 Dec 2025 19:04:15 +0000   Thu, 25 Dec 2025 19:03:44 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 25 Dec 2025 19:04:15 +0000   Thu, 25 Dec 2025 19:03:44 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 25 Dec 2025 19:04:15 +0000   Thu, 25 Dec 2025 19:03:44 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            False   Thu, 25 Dec 2025 19:04:15 +0000   Thu, 25 Dec 2025 19:03:44 +0000   KubeletNotReady              container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/cni/net.d/. Has your network provider started?
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    newest-cni-731832
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863360Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863360Ki
	  pods:               110
	System Info:
	  Machine ID:                 6a05ebbd9dbde47388fc365d694bbbfd
	  System UUID:                5b8d2f7a-018b-4c55-9c9b-3d6cf6b9276f
	  Boot ID:                    665c5054-bd76-444c-ba4d-23c4edde1464
	  Kernel Version:             6.8.0-1045-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.3
	  Kubelet Version:            v1.35.0-rc.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.42.0.0/24
	PodCIDRs:                     10.42.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  kube-system                 etcd-newest-cni-731832                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         35s
	  kube-system                 kindnet-l587m                                100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      28s
	  kube-system                 kube-apiserver-newest-cni-731832             250m (3%)     0 (0%)      0 (0%)           0 (0%)         37s
	  kube-system                 kube-controller-manager-newest-cni-731832    200m (2%)     0 (0%)      0 (0%)           0 (0%)         33s
	  kube-system                 kube-proxy-gnqfh                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         28s
	  kube-system                 kube-scheduler-newest-cni-731832             100m (1%)     0 (0%)      0 (0%)           0 (0%)         33s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (9%)   100m (1%)
	  memory             150Mi (0%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason          Age   From             Message
	  ----    ------          ----  ----             -------
	  Normal  RegisteredNode  29s   node-controller  Node newest-cni-731832 event: Registered Node newest-cni-731832 in Controller
	  Normal  RegisteredNode  5s    node-controller  Node newest-cni-731832 event: Registered Node newest-cni-731832 in Controller
	
	
	==> dmesg <==
	[Dec25 18:17] MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
	[  +0.001703] TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details.
	[  +0.001000] MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
	[  +0.084012] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
	[  +0.391152] i8042: Warning: Keylock active
	[  +0.010665] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.485479] block sda: the capability attribute has been deprecated.
	[  +0.079658] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.024208] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +4.790329] kauditd_printk_skb: 47 callbacks suppressed
	
	
	==> etcd [f7d1c87d0020257be0bb0226c540e4432cc1529072a6a6a02e9610ce7d2a72ad] <==
	{"level":"info","ts":"2025-12-25T19:04:13.815416Z","caller":"fileutil/purge.go:49","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-12-25T19:04:13.815467Z","caller":"fileutil/purge.go:49","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-12-25T19:04:13.815564Z","caller":"embed/etcd.go:640","msg":"serving peer traffic","address":"192.168.85.2:2380"}
	{"level":"info","ts":"2025-12-25T19:04:13.815604Z","caller":"embed/etcd.go:611","msg":"cmux::serve","address":"192.168.85.2:2380"}
	{"level":"info","ts":"2025-12-25T19:04:13.815729Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1981","msg":"9f0758e1c58a86ed switched to configuration voters=(11459225503572592365)"}
	{"level":"info","ts":"2025-12-25T19:04:13.815843Z","caller":"membership/cluster.go:433","msg":"ignore already added member","cluster-id":"68eaea490fab4e05","local-member-id":"9f0758e1c58a86ed","added-peer-id":"9f0758e1c58a86ed","added-peer-peer-urls":["https://192.168.85.2:2380"],"added-peer-is-learner":false}
	{"level":"info","ts":"2025-12-25T19:04:13.815966Z","caller":"membership/cluster.go:674","msg":"updated cluster version","cluster-id":"68eaea490fab4e05","local-member-id":"9f0758e1c58a86ed","from":"3.6","to":"3.6"}
	{"level":"info","ts":"2025-12-25T19:04:14.705434Z","logger":"raft","caller":"v3@v3.6.0/raft.go:988","msg":"9f0758e1c58a86ed is starting a new election at term 2"}
	{"level":"info","ts":"2025-12-25T19:04:14.705540Z","logger":"raft","caller":"v3@v3.6.0/raft.go:930","msg":"9f0758e1c58a86ed became pre-candidate at term 2"}
	{"level":"info","ts":"2025-12-25T19:04:14.705627Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1077","msg":"9f0758e1c58a86ed received MsgPreVoteResp from 9f0758e1c58a86ed at term 2"}
	{"level":"info","ts":"2025-12-25T19:04:14.705649Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1693","msg":"9f0758e1c58a86ed has received 1 MsgPreVoteResp votes and 0 vote rejections"}
	{"level":"info","ts":"2025-12-25T19:04:14.705669Z","logger":"raft","caller":"v3@v3.6.0/raft.go:912","msg":"9f0758e1c58a86ed became candidate at term 3"}
	{"level":"info","ts":"2025-12-25T19:04:14.706416Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1077","msg":"9f0758e1c58a86ed received MsgVoteResp from 9f0758e1c58a86ed at term 3"}
	{"level":"info","ts":"2025-12-25T19:04:14.706446Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1693","msg":"9f0758e1c58a86ed has received 1 MsgVoteResp votes and 0 vote rejections"}
	{"level":"info","ts":"2025-12-25T19:04:14.706462Z","logger":"raft","caller":"v3@v3.6.0/raft.go:970","msg":"9f0758e1c58a86ed became leader at term 3"}
	{"level":"info","ts":"2025-12-25T19:04:14.706470Z","logger":"raft","caller":"v3@v3.6.0/node.go:370","msg":"raft.node: 9f0758e1c58a86ed elected leader 9f0758e1c58a86ed at term 3"}
	{"level":"info","ts":"2025-12-25T19:04:14.707224Z","caller":"etcdserver/server.go:1820","msg":"published local member to cluster through raft","local-member-id":"9f0758e1c58a86ed","local-member-attributes":"{Name:newest-cni-731832 ClientURLs:[https://192.168.85.2:2379]}","cluster-id":"68eaea490fab4e05","publish-timeout":"7s"}
	{"level":"info","ts":"2025-12-25T19:04:14.707250Z","caller":"embed/serve.go:138","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-12-25T19:04:14.707246Z","caller":"embed/serve.go:138","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-12-25T19:04:14.707582Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-12-25T19:04:14.707660Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2025-12-25T19:04:14.708626Z","caller":"v3rpc/health.go:63","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-12-25T19:04:14.708606Z","caller":"v3rpc/health.go:63","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-12-25T19:04:14.711638Z","caller":"embed/serve.go:283","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2025-12-25T19:04:14.711710Z","caller":"embed/serve.go:283","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.85.2:2379"}
	
	
	==> kernel <==
	 19:04:23 up 46 min,  0 user,  load average: 3.94, 2.83, 1.96
	Linux newest-cni-731832 6.8.0-1045-gcp #48~22.04.1-Ubuntu SMP Tue Nov 25 13:07:56 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [30a747c2e4c477b43905a2ae570c93b6cc50fa6dc00fdd514232650211e0a2b6] <==
	I1225 19:04:16.798184       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1225 19:04:16.798419       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1225 19:04:16.798531       1 main.go:148] setting mtu 1500 for CNI 
	I1225 19:04:16.798560       1 main.go:178] kindnetd IP family: "ipv4"
	I1225 19:04:16.798591       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-12-25T19:04:16Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1225 19:04:16.999658       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1225 19:04:16.999699       1 controller.go:381] "Waiting for informer caches to sync"
	I1225 19:04:16.999716       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1225 19:04:16.999856       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1225 19:04:17.496699       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1225 19:04:17.496738       1 metrics.go:72] Registering metrics
	I1225 19:04:17.496968       1 controller.go:711] "Syncing nftables rules"
	
	
	==> kube-apiserver [75fd7f6e481e82625456301d656dce65b6f0292112145825cd68747d96e652ac] <==
	I1225 19:04:15.708532       1 shared_informer.go:377] "Caches are synced"
	I1225 19:04:15.710773       1 shared_informer.go:377] "Caches are synced"
	I1225 19:04:15.708572       1 shared_informer.go:377] "Caches are synced"
	I1225 19:04:15.710005       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1225 19:04:15.710021       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1225 19:04:15.711513       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1225 19:04:15.715294       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1225 19:04:15.716219       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1225 19:04:15.727043       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1225 19:04:15.732611       1 shared_informer.go:377] "Caches are synced"
	I1225 19:04:15.732634       1 policy_source.go:248] refreshing policies
	I1225 19:04:15.742853       1 cidrallocator.go:302] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1225 19:04:15.751841       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1225 19:04:15.983460       1 controller.go:667] quota admission added evaluator for: namespaces
	I1225 19:04:16.014937       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1225 19:04:16.032821       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1225 19:04:16.039468       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1225 19:04:16.048287       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1225 19:04:16.083386       1 alloc.go:329] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.102.98.203"}
	I1225 19:04:16.094549       1 alloc.go:329] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.104.42.120"}
	I1225 19:04:16.613035       1 storage_scheduling.go:139] all system priority classes are created successfully or already exist.
	I1225 19:04:19.285165       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1225 19:04:19.434581       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1225 19:04:19.484377       1 controller.go:667] quota admission added evaluator for: endpoints
	I1225 19:04:19.535691       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	
	
	==> kube-controller-manager [7cd3b0eb1fd2e4969002541b2f4ae25ee7229906d8fe3533bb4ab750efb6b446] <==
	I1225 19:04:18.881616       1 shared_informer.go:377] "Caches are synced"
	I1225 19:04:18.884548       1 garbagecollector.go:166] "Garbage collector: all resource monitors have synced"
	I1225 19:04:18.884565       1 garbagecollector.go:169] "Proceeding to collect garbage"
	I1225 19:04:18.881667       1 shared_informer.go:377] "Caches are synced"
	I1225 19:04:18.881694       1 shared_informer.go:377] "Caches are synced"
	I1225 19:04:18.868747       1 shared_informer.go:377] "Caches are synced"
	I1225 19:04:18.868656       1 shared_informer.go:377] "Caches are synced"
	I1225 19:04:18.867754       1 shared_informer.go:377] "Caches are synced"
	I1225 19:04:18.883184       1 shared_informer.go:377] "Caches are synced"
	I1225 19:04:18.868872       1 shared_informer.go:377] "Caches are synced"
	I1225 19:04:18.892183       1 shared_informer.go:377] "Caches are synced"
	I1225 19:04:18.881682       1 shared_informer.go:377] "Caches are synced"
	I1225 19:04:18.894874       1 shared_informer.go:377] "Caches are synced"
	I1225 19:04:18.897414       1 shared_informer.go:377] "Caches are synced"
	I1225 19:04:18.897744       1 shared_informer.go:377] "Caches are synced"
	I1225 19:04:18.897801       1 shared_informer.go:377] "Caches are synced"
	I1225 19:04:18.899464       1 shared_informer.go:377] "Caches are synced"
	I1225 19:04:18.899525       1 shared_informer.go:377] "Caches are synced"
	I1225 19:04:18.899550       1 shared_informer.go:377] "Caches are synced"
	I1225 19:04:18.900011       1 shared_informer.go:377] "Caches are synced"
	I1225 19:04:18.900022       1 shared_informer.go:377] "Caches are synced"
	I1225 19:04:18.900027       1 shared_informer.go:377] "Caches are synced"
	I1225 19:04:18.900278       1 shared_informer.go:377] "Caches are synced"
	I1225 19:04:18.900562       1 shared_informer.go:377] "Caches are synced"
	I1225 19:04:18.952232       1 shared_informer.go:377] "Caches are synced"
	
	
	==> kube-proxy [32ba98d006f6f3a3154c40ff151535abf5952d3effea067df2b776e9329f7596] <==
	I1225 19:04:16.614081       1 server_linux.go:53] "Using iptables proxy"
	I1225 19:04:16.670193       1 shared_informer.go:370] "Waiting for caches to sync"
	I1225 19:04:16.770967       1 shared_informer.go:377] "Caches are synced"
	I1225 19:04:16.771012       1 server.go:218] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1225 19:04:16.771140       1 server.go:255] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1225 19:04:16.792619       1 server.go:264] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1225 19:04:16.792683       1 server_linux.go:136] "Using iptables Proxier"
	I1225 19:04:16.799016       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1225 19:04:16.799518       1 server.go:529] "Version info" version="v1.35.0-rc.1"
	I1225 19:04:16.799604       1 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1225 19:04:16.802200       1 config.go:309] "Starting node config controller"
	I1225 19:04:16.802222       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1225 19:04:16.802579       1 config.go:403] "Starting serviceCIDR config controller"
	I1225 19:04:16.802591       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1225 19:04:16.802617       1 config.go:200] "Starting service config controller"
	I1225 19:04:16.802624       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1225 19:04:16.802654       1 config.go:106] "Starting endpoint slice config controller"
	I1225 19:04:16.802672       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1225 19:04:16.903080       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1225 19:04:16.903115       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1225 19:04:16.903152       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1225 19:04:16.903248       1 shared_informer.go:356] "Caches are synced" controller="node config"
	
	
	==> kube-scheduler [e02cd2fcac3d735d321c341c2fba7aabc974e0d4826fa67f14fd79754e0c64c4] <==
	I1225 19:04:14.075958       1 serving.go:386] Generated self-signed cert in-memory
	W1225 19:04:15.646277       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1225 19:04:15.646324       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1225 19:04:15.646337       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1225 19:04:15.646346       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1225 19:04:15.681232       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.35.0-rc.1"
	I1225 19:04:15.681273       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1225 19:04:15.685489       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1225 19:04:15.685708       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1225 19:04:15.686562       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1225 19:04:15.686580       1 shared_informer.go:370] "Waiting for caches to sync"
	I1225 19:04:15.786842       1 shared_informer.go:377] "Caches are synced"
	
	
	==> kubelet <==
	Dec 25 19:04:15 newest-cni-731832 kubelet[674]: I1225 19:04:15.831274     674 kubelet_node_status.go:77] "Successfully registered node" node="newest-cni-731832"
	Dec 25 19:04:15 newest-cni-731832 kubelet[674]: I1225 19:04:15.831302     674 kuberuntime_manager.go:2062] "Updating runtime config through cri with podcidr" CIDR="10.42.0.0/24"
	Dec 25 19:04:15 newest-cni-731832 kubelet[674]: I1225 19:04:15.832150     674 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.42.0.0/24"
	Dec 25 19:04:15 newest-cni-731832 kubelet[674]: E1225 19:04:15.837647     674 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"etcd-newest-cni-731832\" already exists" pod="kube-system/etcd-newest-cni-731832"
	Dec 25 19:04:15 newest-cni-731832 kubelet[674]: I1225 19:04:15.837681     674 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-newest-cni-731832"
	Dec 25 19:04:15 newest-cni-731832 kubelet[674]: E1225 19:04:15.845749     674 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-apiserver-newest-cni-731832\" already exists" pod="kube-system/kube-apiserver-newest-cni-731832"
	Dec 25 19:04:15 newest-cni-731832 kubelet[674]: I1225 19:04:15.845789     674 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-newest-cni-731832"
	Dec 25 19:04:15 newest-cni-731832 kubelet[674]: E1225 19:04:15.851919     674 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-controller-manager-newest-cni-731832\" already exists" pod="kube-system/kube-controller-manager-newest-cni-731832"
	Dec 25 19:04:15 newest-cni-731832 kubelet[674]: I1225 19:04:15.851951     674 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-newest-cni-731832"
	Dec 25 19:04:15 newest-cni-731832 kubelet[674]: E1225 19:04:15.858451     674 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-scheduler-newest-cni-731832\" already exists" pod="kube-system/kube-scheduler-newest-cni-731832"
	Dec 25 19:04:16 newest-cni-731832 kubelet[674]: I1225 19:04:16.221029     674 apiserver.go:52] "Watching apiserver"
	Dec 25 19:04:16 newest-cni-731832 kubelet[674]: E1225 19:04:16.225733     674 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-controller-manager-newest-cni-731832" containerName="kube-controller-manager"
	Dec 25 19:04:16 newest-cni-731832 kubelet[674]: I1225 19:04:16.230858     674 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
	Dec 25 19:04:16 newest-cni-731832 kubelet[674]: I1225 19:04:16.253036     674 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/6a88d1e0-b81d-4b51-a2dd-283548deb416-xtables-lock\") pod \"kindnet-l587m\" (UID: \"6a88d1e0-b81d-4b51-a2dd-283548deb416\") " pod="kube-system/kindnet-l587m"
	Dec 25 19:04:16 newest-cni-731832 kubelet[674]: I1225 19:04:16.253257     674 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/7a8b403f-215a-402e-80a0-8c070cdc4875-xtables-lock\") pod \"kube-proxy-gnqfh\" (UID: \"7a8b403f-215a-402e-80a0-8c070cdc4875\") " pod="kube-system/kube-proxy-gnqfh"
	Dec 25 19:04:16 newest-cni-731832 kubelet[674]: I1225 19:04:16.253312     674 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/7a8b403f-215a-402e-80a0-8c070cdc4875-lib-modules\") pod \"kube-proxy-gnqfh\" (UID: \"7a8b403f-215a-402e-80a0-8c070cdc4875\") " pod="kube-system/kube-proxy-gnqfh"
	Dec 25 19:04:16 newest-cni-731832 kubelet[674]: I1225 19:04:16.253409     674 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/6a88d1e0-b81d-4b51-a2dd-283548deb416-cni-cfg\") pod \"kindnet-l587m\" (UID: \"6a88d1e0-b81d-4b51-a2dd-283548deb416\") " pod="kube-system/kindnet-l587m"
	Dec 25 19:04:16 newest-cni-731832 kubelet[674]: I1225 19:04:16.253448     674 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/6a88d1e0-b81d-4b51-a2dd-283548deb416-lib-modules\") pod \"kindnet-l587m\" (UID: \"6a88d1e0-b81d-4b51-a2dd-283548deb416\") " pod="kube-system/kindnet-l587m"
	Dec 25 19:04:16 newest-cni-731832 kubelet[674]: E1225 19:04:16.261499     674 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-newest-cni-731832" containerName="kube-scheduler"
	Dec 25 19:04:16 newest-cni-731832 kubelet[674]: E1225 19:04:16.261615     674 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/etcd-newest-cni-731832" containerName="etcd"
	Dec 25 19:04:16 newest-cni-731832 kubelet[674]: E1225 19:04:16.261838     674 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-newest-cni-731832" containerName="kube-apiserver"
	Dec 25 19:04:18 newest-cni-731832 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Dec 25 19:04:18 newest-cni-731832 kubelet[674]: I1225 19:04:18.165288     674 dynamic_cafile_content.go:175] "Shutting down controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	Dec 25 19:04:18 newest-cni-731832 systemd[1]: kubelet.service: Deactivated successfully.
	Dec 25 19:04:18 newest-cni-731832 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	

                                                
                                                
-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-731832 -n newest-cni-731832
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-731832 -n newest-cni-731832: exit status 2 (347.89509ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:270: (dbg) Run:  kubectl --context newest-cni-731832 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:281: non-running pods: coredns-7d764666f9-hsm6h storage-provisioner dashboard-metrics-scraper-867fb5f87b-xmcmm kubernetes-dashboard-b84665fb8-qz6h4
helpers_test.go:283: ======> post-mortem[TestStartStop/group/newest-cni/serial/Pause]: describe non-running pods <======
helpers_test.go:286: (dbg) Run:  kubectl --context newest-cni-731832 describe pod coredns-7d764666f9-hsm6h storage-provisioner dashboard-metrics-scraper-867fb5f87b-xmcmm kubernetes-dashboard-b84665fb8-qz6h4
helpers_test.go:286: (dbg) Non-zero exit: kubectl --context newest-cni-731832 describe pod coredns-7d764666f9-hsm6h storage-provisioner dashboard-metrics-scraper-867fb5f87b-xmcmm kubernetes-dashboard-b84665fb8-qz6h4: exit status 1 (65.447471ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "coredns-7d764666f9-hsm6h" not found
	Error from server (NotFound): pods "storage-provisioner" not found
	Error from server (NotFound): pods "dashboard-metrics-scraper-867fb5f87b-xmcmm" not found
	Error from server (NotFound): pods "kubernetes-dashboard-b84665fb8-qz6h4" not found

                                                
                                                
** /stderr **
helpers_test.go:288: kubectl --context newest-cni-731832 describe pod coredns-7d764666f9-hsm6h storage-provisioner dashboard-metrics-scraper-867fb5f87b-xmcmm kubernetes-dashboard-b84665fb8-qz6h4: exit status 1
--- FAIL: TestStartStop/group/newest-cni/serial/Pause (6.46s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/Pause (6.61s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p default-k8s-diff-port-960022 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 pause -p default-k8s-diff-port-960022 --alsologtostderr -v=1: exit status 80 (2.484231516s)

                                                
                                                
-- stdout --
	* Pausing node default-k8s-diff-port-960022 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1225 19:05:08.832141  328997 out.go:360] Setting OutFile to fd 1 ...
	I1225 19:05:08.832271  328997 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1225 19:05:08.832278  328997 out.go:374] Setting ErrFile to fd 2...
	I1225 19:05:08.832284  328997 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1225 19:05:08.832528  328997 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22301-5579/.minikube/bin
	I1225 19:05:08.832787  328997 out.go:368] Setting JSON to false
	I1225 19:05:08.832813  328997 mustload.go:66] Loading cluster: default-k8s-diff-port-960022
	I1225 19:05:08.833239  328997 config.go:182] Loaded profile config "default-k8s-diff-port-960022": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1225 19:05:08.833823  328997 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-960022 --format={{.State.Status}}
	I1225 19:05:08.858380  328997 host.go:66] Checking if "default-k8s-diff-port-960022" exists ...
	I1225 19:05:08.858847  328997 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1225 19:05:08.927446  328997 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:4 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:77 OomKillDisable:false NGoroutines:85 SystemTime:2025-12-25 19:05:08.913990841 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:dea7da592f5d1d2b7755e3a161be07f43fad8f75 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1225 19:05:08.928292  328997 pause.go:60] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-
cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/22316/minikube-v1.37.0-1766570787-22316-amd64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1766570787-22316/minikube-v1.37.0-1766570787-22316-amd64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1766570787-22316-amd64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qe
mu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) preload-source:auto profile:default-k8s-diff-port-960022 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) rosetta:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarni
ng:%!s(bool=true) wantupdatenotification:%!s(bool=true) wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1225 19:05:08.931365  328997 out.go:179] * Pausing node default-k8s-diff-port-960022 ... 
	I1225 19:05:08.932522  328997 host.go:66] Checking if "default-k8s-diff-port-960022" exists ...
	I1225 19:05:08.932789  328997 ssh_runner.go:195] Run: systemctl --version
	I1225 19:05:08.932839  328997 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-960022
	I1225 19:05:08.964378  328997 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33108 SSHKeyPath:/home/jenkins/minikube-integration/22301-5579/.minikube/machines/default-k8s-diff-port-960022/id_rsa Username:docker}
	I1225 19:05:09.070651  328997 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1225 19:05:09.088332  328997 pause.go:52] kubelet running: true
	I1225 19:05:09.088395  328997 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1225 19:05:09.297851  328997 cri.go:61] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1225 19:05:09.297973  328997 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1225 19:05:09.413350  328997 cri.go:96] found id: "5649c03c0aa633da79d3929ef429eb6a11236dda58d14ea813f653c269745beb"
	I1225 19:05:09.413376  328997 cri.go:96] found id: "fdbf81a94147e6e035a27f9d8d605db6a96cbbbddbd65b9f768e335d836bedb5"
	I1225 19:05:09.413382  328997 cri.go:96] found id: "f2ca16d825df4a18996b07e424ec1ab2fbf76ac12170d34c7de8ec692f2addc5"
	I1225 19:05:09.413387  328997 cri.go:96] found id: "3aa3159c3178dba42f58b963940a73d87ed0b361760a6b4cda22ce96594b70b9"
	I1225 19:05:09.413392  328997 cri.go:96] found id: "132f0bde2b6bf2854770419c66dbc956a1f62dbc7f3be89c002b08f5c1f6eaa0"
	I1225 19:05:09.413398  328997 cri.go:96] found id: "deb534fd994d4a2ae1235cd069ddaa760e1a5e6170fbf9a1ea236267d7a7dbf3"
	I1225 19:05:09.413402  328997 cri.go:96] found id: "d7afd3e6efe6f106fd792404c924d54e7a199c5c88a6c82664ffa1c729eee3ee"
	I1225 19:05:09.413407  328997 cri.go:96] found id: "e331a83a17cd96725879adde3c8dabff77823d5c1af59510c5a9822f15b9601d"
	I1225 19:05:09.413410  328997 cri.go:96] found id: "354a51e629671e49dd48aa32ce81ed41d5eaf4761e538194e03358bc1fcc7c09"
	I1225 19:05:09.413420  328997 cri.go:96] found id: "14c27e56e2876104b9b97af1293ed36130d30c1c3b4118d07854fbbf7d79831b"
	I1225 19:05:09.413423  328997 cri.go:96] found id: "d0ee12735cd4db3a4f33b6c01940acfb704c79ae33d33dd565e52a63afdb2b14"
	I1225 19:05:09.413426  328997 cri.go:96] found id: ""
	I1225 19:05:09.413465  328997 ssh_runner.go:195] Run: sudo runc list -f json
	I1225 19:05:09.425383  328997 retry.go:84] will retry after 200ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-25T19:05:09Z" level=error msg="open /run/runc: no such file or directory"
	I1225 19:05:09.577645  328997 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1225 19:05:09.593574  328997 pause.go:52] kubelet running: false
	I1225 19:05:09.593644  328997 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1225 19:05:09.747990  328997 cri.go:61] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1225 19:05:09.748079  328997 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1225 19:05:09.815112  328997 cri.go:96] found id: "5649c03c0aa633da79d3929ef429eb6a11236dda58d14ea813f653c269745beb"
	I1225 19:05:09.815137  328997 cri.go:96] found id: "fdbf81a94147e6e035a27f9d8d605db6a96cbbbddbd65b9f768e335d836bedb5"
	I1225 19:05:09.815142  328997 cri.go:96] found id: "f2ca16d825df4a18996b07e424ec1ab2fbf76ac12170d34c7de8ec692f2addc5"
	I1225 19:05:09.815145  328997 cri.go:96] found id: "3aa3159c3178dba42f58b963940a73d87ed0b361760a6b4cda22ce96594b70b9"
	I1225 19:05:09.815148  328997 cri.go:96] found id: "132f0bde2b6bf2854770419c66dbc956a1f62dbc7f3be89c002b08f5c1f6eaa0"
	I1225 19:05:09.815152  328997 cri.go:96] found id: "deb534fd994d4a2ae1235cd069ddaa760e1a5e6170fbf9a1ea236267d7a7dbf3"
	I1225 19:05:09.815155  328997 cri.go:96] found id: "d7afd3e6efe6f106fd792404c924d54e7a199c5c88a6c82664ffa1c729eee3ee"
	I1225 19:05:09.815158  328997 cri.go:96] found id: "e331a83a17cd96725879adde3c8dabff77823d5c1af59510c5a9822f15b9601d"
	I1225 19:05:09.815161  328997 cri.go:96] found id: "354a51e629671e49dd48aa32ce81ed41d5eaf4761e538194e03358bc1fcc7c09"
	I1225 19:05:09.815173  328997 cri.go:96] found id: "14c27e56e2876104b9b97af1293ed36130d30c1c3b4118d07854fbbf7d79831b"
	I1225 19:05:09.815179  328997 cri.go:96] found id: "d0ee12735cd4db3a4f33b6c01940acfb704c79ae33d33dd565e52a63afdb2b14"
	I1225 19:05:09.815181  328997 cri.go:96] found id: ""
	I1225 19:05:09.815217  328997 ssh_runner.go:195] Run: sudo runc list -f json
	I1225 19:05:10.381088  328997 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1225 19:05:10.393853  328997 pause.go:52] kubelet running: false
	I1225 19:05:10.393941  328997 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1225 19:05:10.569269  328997 cri.go:61] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1225 19:05:10.569366  328997 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1225 19:05:10.645050  328997 cri.go:96] found id: "5649c03c0aa633da79d3929ef429eb6a11236dda58d14ea813f653c269745beb"
	I1225 19:05:10.645078  328997 cri.go:96] found id: "fdbf81a94147e6e035a27f9d8d605db6a96cbbbddbd65b9f768e335d836bedb5"
	I1225 19:05:10.645085  328997 cri.go:96] found id: "f2ca16d825df4a18996b07e424ec1ab2fbf76ac12170d34c7de8ec692f2addc5"
	I1225 19:05:10.645091  328997 cri.go:96] found id: "3aa3159c3178dba42f58b963940a73d87ed0b361760a6b4cda22ce96594b70b9"
	I1225 19:05:10.645096  328997 cri.go:96] found id: "132f0bde2b6bf2854770419c66dbc956a1f62dbc7f3be89c002b08f5c1f6eaa0"
	I1225 19:05:10.645106  328997 cri.go:96] found id: "deb534fd994d4a2ae1235cd069ddaa760e1a5e6170fbf9a1ea236267d7a7dbf3"
	I1225 19:05:10.645110  328997 cri.go:96] found id: "d7afd3e6efe6f106fd792404c924d54e7a199c5c88a6c82664ffa1c729eee3ee"
	I1225 19:05:10.645114  328997 cri.go:96] found id: "e331a83a17cd96725879adde3c8dabff77823d5c1af59510c5a9822f15b9601d"
	I1225 19:05:10.645118  328997 cri.go:96] found id: "354a51e629671e49dd48aa32ce81ed41d5eaf4761e538194e03358bc1fcc7c09"
	I1225 19:05:10.645139  328997 cri.go:96] found id: "14c27e56e2876104b9b97af1293ed36130d30c1c3b4118d07854fbbf7d79831b"
	I1225 19:05:10.645148  328997 cri.go:96] found id: "d0ee12735cd4db3a4f33b6c01940acfb704c79ae33d33dd565e52a63afdb2b14"
	I1225 19:05:10.645152  328997 cri.go:96] found id: ""
	I1225 19:05:10.645251  328997 ssh_runner.go:195] Run: sudo runc list -f json
	I1225 19:05:11.008190  328997 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1225 19:05:11.020948  328997 pause.go:52] kubelet running: false
	I1225 19:05:11.021027  328997 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1225 19:05:11.161798  328997 cri.go:61] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1225 19:05:11.161877  328997 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1225 19:05:11.224888  328997 cri.go:96] found id: "5649c03c0aa633da79d3929ef429eb6a11236dda58d14ea813f653c269745beb"
	I1225 19:05:11.224950  328997 cri.go:96] found id: "fdbf81a94147e6e035a27f9d8d605db6a96cbbbddbd65b9f768e335d836bedb5"
	I1225 19:05:11.224954  328997 cri.go:96] found id: "f2ca16d825df4a18996b07e424ec1ab2fbf76ac12170d34c7de8ec692f2addc5"
	I1225 19:05:11.224958  328997 cri.go:96] found id: "3aa3159c3178dba42f58b963940a73d87ed0b361760a6b4cda22ce96594b70b9"
	I1225 19:05:11.224961  328997 cri.go:96] found id: "132f0bde2b6bf2854770419c66dbc956a1f62dbc7f3be89c002b08f5c1f6eaa0"
	I1225 19:05:11.224964  328997 cri.go:96] found id: "deb534fd994d4a2ae1235cd069ddaa760e1a5e6170fbf9a1ea236267d7a7dbf3"
	I1225 19:05:11.224967  328997 cri.go:96] found id: "d7afd3e6efe6f106fd792404c924d54e7a199c5c88a6c82664ffa1c729eee3ee"
	I1225 19:05:11.224969  328997 cri.go:96] found id: "e331a83a17cd96725879adde3c8dabff77823d5c1af59510c5a9822f15b9601d"
	I1225 19:05:11.224972  328997 cri.go:96] found id: "354a51e629671e49dd48aa32ce81ed41d5eaf4761e538194e03358bc1fcc7c09"
	I1225 19:05:11.224984  328997 cri.go:96] found id: "14c27e56e2876104b9b97af1293ed36130d30c1c3b4118d07854fbbf7d79831b"
	I1225 19:05:11.224988  328997 cri.go:96] found id: "d0ee12735cd4db3a4f33b6c01940acfb704c79ae33d33dd565e52a63afdb2b14"
	I1225 19:05:11.224990  328997 cri.go:96] found id: ""
	I1225 19:05:11.225027  328997 ssh_runner.go:195] Run: sudo runc list -f json
	I1225 19:05:11.238363  328997 out.go:203] 
	W1225 19:05:11.239834  328997 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-25T19:05:11Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-25T19:05:11Z" level=error msg="open /run/runc: no such file or directory"
	
	W1225 19:05:11.239851  328997 out.go:285] * 
	* 
	W1225 19:05:11.241548  328997 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1225 19:05:11.242754  328997 out.go:203] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:309: out/minikube-linux-amd64 pause -p default-k8s-diff-port-960022 --alsologtostderr -v=1 failed: exit status 80
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect default-k8s-diff-port-960022
helpers_test.go:244: (dbg) docker inspect default-k8s-diff-port-960022:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "e715f5c007f682ea129fd33b0f719ca5682bfd93ff193a553aa1f39c184e3d0f",
	        "Created": "2025-12-25T19:03:07.962087481Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 310397,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-25T19:04:11.623555976Z",
	            "FinishedAt": "2025-12-25T19:04:10.599028284Z"
	        },
	        "Image": "sha256:c444b5e11df9d6bc496256b0e11f4e11deb33a885211ded2c3a4667df55bcb6b",
	        "ResolvConfPath": "/var/lib/docker/containers/e715f5c007f682ea129fd33b0f719ca5682bfd93ff193a553aa1f39c184e3d0f/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/e715f5c007f682ea129fd33b0f719ca5682bfd93ff193a553aa1f39c184e3d0f/hostname",
	        "HostsPath": "/var/lib/docker/containers/e715f5c007f682ea129fd33b0f719ca5682bfd93ff193a553aa1f39c184e3d0f/hosts",
	        "LogPath": "/var/lib/docker/containers/e715f5c007f682ea129fd33b0f719ca5682bfd93ff193a553aa1f39c184e3d0f/e715f5c007f682ea129fd33b0f719ca5682bfd93ff193a553aa1f39c184e3d0f-json.log",
	        "Name": "/default-k8s-diff-port-960022",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "default-k8s-diff-port-960022:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "default-k8s-diff-port-960022",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "e715f5c007f682ea129fd33b0f719ca5682bfd93ff193a553aa1f39c184e3d0f",
	                "LowerDir": "/var/lib/docker/overlay2/183acc595d1c6327748578242623306ecba85c5f3e4e2d46fbcc0037e6eeba8c-init/diff:/var/lib/docker/overlay2/8152586e7e91edad0090b5c322534edd1346ae6dc28cbca1827aa4c23f366758/diff",
	                "MergedDir": "/var/lib/docker/overlay2/183acc595d1c6327748578242623306ecba85c5f3e4e2d46fbcc0037e6eeba8c/merged",
	                "UpperDir": "/var/lib/docker/overlay2/183acc595d1c6327748578242623306ecba85c5f3e4e2d46fbcc0037e6eeba8c/diff",
	                "WorkDir": "/var/lib/docker/overlay2/183acc595d1c6327748578242623306ecba85c5f3e4e2d46fbcc0037e6eeba8c/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "default-k8s-diff-port-960022",
	                "Source": "/var/lib/docker/volumes/default-k8s-diff-port-960022/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "default-k8s-diff-port-960022",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8444/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "default-k8s-diff-port-960022",
	                "name.minikube.sigs.k8s.io": "default-k8s-diff-port-960022",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "ebebc9af22b7525259a240328c212757ccc0bee502bb725cfaa662b5c90d4c9a",
	            "SandboxKey": "/var/run/docker/netns/ebebc9af22b7",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33108"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33109"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33112"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33110"
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33111"
	                    }
	                ]
	            },
	            "Networks": {
	                "default-k8s-diff-port-960022": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.103.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "6496648f4bb9e6db2a787d51dc81aaa3ff1aaea70439b67d588aff1a80515c8b",
	                    "EndpointID": "93bf82f077409f18f03edfafd6ad776887e64929b2afc54fcdb9f7399cea1325",
	                    "Gateway": "192.168.103.1",
	                    "IPAddress": "192.168.103.2",
	                    "MacAddress": "c6:bf:f2:cb:c6:ae",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "default-k8s-diff-port-960022",
	                        "e715f5c007f6"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
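The mapped host ports recorded in the NetworkSettings block above can be read back with the same Go template the harness runs elsewhere in this report; a minimal sketch (22/tcp shown; substituting 8444/tcp would return the forwarded API server port, 33111 in this run):

	docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' default-k8s-diff-port-960022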
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-960022 -n default-k8s-diff-port-960022
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-960022 -n default-k8s-diff-port-960022: exit status 2 (339.621365ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
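The non-zero exit from status here reflects that not every component reports healthy even though the host itself shows Running; one way to see the per-component breakdown (host, kubelet, apiserver, kubeconfig) is to drop the --format filter and ask for JSON output. A sketch, assuming the status command's --output flag:

	out/minikube-linux-amd64 status -p default-k8s-diff-port-960022 --output json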
helpers_test.go:253: <<< TestStartStop/group/default-k8s-diff-port/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-960022 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-amd64 -p default-k8s-diff-port-960022 logs -n 25: (1.205957238s)
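The post-mortem grab above only pulls the last 25 lines per source; when reproducing this failure locally, the full log can be written to a file for offline inspection. A sketch, assuming the logs command's --file flag:

	out/minikube-linux-amd64 -p default-k8s-diff-port-960022 logs --file=post-mortem.log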
helpers_test.go:261: TestStartStop/group/default-k8s-diff-port/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                  ARGS                                                                  │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p auto-910464 sudo journalctl -xeu kubelet --all --full --no-pager                                                                    │ auto-910464                  │ jenkins │ v1.37.0 │ 25 Dec 25 19:04 UTC │ 25 Dec 25 19:04 UTC │
	│ ssh     │ -p auto-910464 sudo cat /etc/kubernetes/kubelet.conf                                                                                   │ auto-910464                  │ jenkins │ v1.37.0 │ 25 Dec 25 19:04 UTC │ 25 Dec 25 19:04 UTC │
	│ ssh     │ -p auto-910464 sudo cat /var/lib/kubelet/config.yaml                                                                                   │ auto-910464                  │ jenkins │ v1.37.0 │ 25 Dec 25 19:04 UTC │ 25 Dec 25 19:04 UTC │
	│ ssh     │ -p auto-910464 sudo systemctl status docker --all --full --no-pager                                                                    │ auto-910464                  │ jenkins │ v1.37.0 │ 25 Dec 25 19:04 UTC │                     │
	│ ssh     │ -p auto-910464 sudo systemctl cat docker --no-pager                                                                                    │ auto-910464                  │ jenkins │ v1.37.0 │ 25 Dec 25 19:04 UTC │ 25 Dec 25 19:04 UTC │
	│ ssh     │ -p auto-910464 sudo cat /etc/docker/daemon.json                                                                                        │ auto-910464                  │ jenkins │ v1.37.0 │ 25 Dec 25 19:04 UTC │                     │
	│ ssh     │ -p auto-910464 sudo docker system info                                                                                                 │ auto-910464                  │ jenkins │ v1.37.0 │ 25 Dec 25 19:04 UTC │                     │
	│ ssh     │ -p auto-910464 sudo systemctl status cri-docker --all --full --no-pager                                                                │ auto-910464                  │ jenkins │ v1.37.0 │ 25 Dec 25 19:04 UTC │                     │
	│ ssh     │ -p auto-910464 sudo systemctl cat cri-docker --no-pager                                                                                │ auto-910464                  │ jenkins │ v1.37.0 │ 25 Dec 25 19:04 UTC │ 25 Dec 25 19:04 UTC │
	│ ssh     │ -p auto-910464 sudo cat /etc/systemd/system/cri-docker.service.d/10-cni.conf                                                           │ auto-910464                  │ jenkins │ v1.37.0 │ 25 Dec 25 19:04 UTC │                     │
	│ ssh     │ -p auto-910464 sudo cat /usr/lib/systemd/system/cri-docker.service                                                                     │ auto-910464                  │ jenkins │ v1.37.0 │ 25 Dec 25 19:04 UTC │ 25 Dec 25 19:04 UTC │
	│ ssh     │ -p auto-910464 sudo cri-dockerd --version                                                                                              │ auto-910464                  │ jenkins │ v1.37.0 │ 25 Dec 25 19:04 UTC │ 25 Dec 25 19:04 UTC │
	│ ssh     │ -p auto-910464 sudo systemctl status containerd --all --full --no-pager                                                                │ auto-910464                  │ jenkins │ v1.37.0 │ 25 Dec 25 19:04 UTC │                     │
	│ ssh     │ -p auto-910464 sudo systemctl cat containerd --no-pager                                                                                │ auto-910464                  │ jenkins │ v1.37.0 │ 25 Dec 25 19:04 UTC │ 25 Dec 25 19:04 UTC │
	│ ssh     │ -p auto-910464 sudo cat /lib/systemd/system/containerd.service                                                                         │ auto-910464                  │ jenkins │ v1.37.0 │ 25 Dec 25 19:04 UTC │ 25 Dec 25 19:04 UTC │
	│ ssh     │ -p auto-910464 sudo cat /etc/containerd/config.toml                                                                                    │ auto-910464                  │ jenkins │ v1.37.0 │ 25 Dec 25 19:04 UTC │ 25 Dec 25 19:04 UTC │
	│ ssh     │ -p auto-910464 sudo containerd config dump                                                                                             │ auto-910464                  │ jenkins │ v1.37.0 │ 25 Dec 25 19:04 UTC │ 25 Dec 25 19:04 UTC │
	│ ssh     │ -p auto-910464 sudo systemctl status crio --all --full --no-pager                                                                      │ auto-910464                  │ jenkins │ v1.37.0 │ 25 Dec 25 19:04 UTC │ 25 Dec 25 19:04 UTC │
	│ ssh     │ -p auto-910464 sudo systemctl cat crio --no-pager                                                                                      │ auto-910464                  │ jenkins │ v1.37.0 │ 25 Dec 25 19:04 UTC │ 25 Dec 25 19:04 UTC │
	│ ssh     │ -p auto-910464 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                            │ auto-910464                  │ jenkins │ v1.37.0 │ 25 Dec 25 19:04 UTC │ 25 Dec 25 19:04 UTC │
	│ ssh     │ -p auto-910464 sudo crio config                                                                                                        │ auto-910464                  │ jenkins │ v1.37.0 │ 25 Dec 25 19:04 UTC │ 25 Dec 25 19:04 UTC │
	│ delete  │ -p auto-910464                                                                                                                         │ auto-910464                  │ jenkins │ v1.37.0 │ 25 Dec 25 19:04 UTC │ 25 Dec 25 19:04 UTC │
	│ start   │ -p calico-910464 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=crio │ calico-910464                │ jenkins │ v1.37.0 │ 25 Dec 25 19:04 UTC │                     │
	│ image   │ default-k8s-diff-port-960022 image list --format=json                                                                                  │ default-k8s-diff-port-960022 │ jenkins │ v1.37.0 │ 25 Dec 25 19:05 UTC │ 25 Dec 25 19:05 UTC │
	│ pause   │ -p default-k8s-diff-port-960022 --alsologtostderr -v=1                                                                                 │ default-k8s-diff-port-960022 │ jenkins │ v1.37.0 │ 25 Dec 25 19:05 UTC │                     │
	└─────────┴────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/25 19:04:52
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1225 19:04:52.951715  325002 out.go:360] Setting OutFile to fd 1 ...
	I1225 19:04:52.952031  325002 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1225 19:04:52.952043  325002 out.go:374] Setting ErrFile to fd 2...
	I1225 19:04:52.952049  325002 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1225 19:04:52.952394  325002 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22301-5579/.minikube/bin
	I1225 19:04:52.953046  325002 out.go:368] Setting JSON to false
	I1225 19:04:52.954234  325002 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":2841,"bootTime":1766686652,"procs":330,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1225 19:04:52.954301  325002 start.go:143] virtualization: kvm guest
	I1225 19:04:52.955712  325002 out.go:179] * [calico-910464] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1225 19:04:52.957049  325002 notify.go:221] Checking for updates...
	I1225 19:04:52.957077  325002 out.go:179]   - MINIKUBE_LOCATION=22301
	I1225 19:04:52.958284  325002 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1225 19:04:52.959695  325002 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22301-5579/kubeconfig
	I1225 19:04:52.960787  325002 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22301-5579/.minikube
	I1225 19:04:52.961823  325002 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1225 19:04:52.962903  325002 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1225 19:04:48.859576  316482 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1225 19:04:48.863516  316482 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.3/kubectl ...
	I1225 19:04:48.863530  316482 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2620 bytes)
	I1225 19:04:48.876029  316482 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1225 19:04:49.106442  316482 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1225 19:04:49.106558  316482 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1225 19:04:49.106557  316482 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes kindnet-910464 minikube.k8s.io/updated_at=2025_12_25T19_04_49_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=65b0339f3ab6fa9cf527eb915d9288ef7a9c7fef minikube.k8s.io/name=kindnet-910464 minikube.k8s.io/primary=true
	I1225 19:04:49.119914  316482 ops.go:34] apiserver oom_adj: -16
	I1225 19:04:49.223761  316482 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1225 19:04:49.724563  316482 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1225 19:04:50.223805  316482 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1225 19:04:50.724009  316482 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1225 19:04:51.224535  316482 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1225 19:04:51.724730  316482 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1225 19:04:52.224614  316482 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1225 19:04:52.723856  316482 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1225 19:04:52.964776  325002 config.go:182] Loaded profile config "default-k8s-diff-port-960022": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1225 19:04:52.964974  325002 config.go:182] Loaded profile config "kindnet-910464": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1225 19:04:52.965115  325002 config.go:182] Loaded profile config "kubernetes-upgrade-498224": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-rc.1
	I1225 19:04:52.965257  325002 driver.go:422] Setting default libvirt URI to qemu:///system
	I1225 19:04:52.996718  325002 docker.go:124] docker version: linux-29.1.3:Docker Engine - Community
	I1225 19:04:52.996819  325002 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1225 19:04:53.063037  325002 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:64 OomKillDisable:false NGoroutines:75 SystemTime:2025-12-25 19:04:53.05183533 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x8
6_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:dea7da592f5d1d2b7755e3a161be07f43fad8f75 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[ma
p[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1225 19:04:53.063137  325002 docker.go:319] overlay module found
	I1225 19:04:53.064924  325002 out.go:179] * Using the docker driver based on user configuration
	I1225 19:04:53.066228  325002 start.go:309] selected driver: docker
	I1225 19:04:53.066242  325002 start.go:928] validating driver "docker" against <nil>
	I1225 19:04:53.066257  325002 start.go:939] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1225 19:04:53.067027  325002 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1225 19:04:53.129699  325002 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:64 OomKillDisable:false NGoroutines:75 SystemTime:2025-12-25 19:04:53.118804631 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:dea7da592f5d1d2b7755e3a161be07f43fad8f75 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1225 19:04:53.129884  325002 start_flags.go:333] no existing cluster config was found, will generate one from the flags 
	I1225 19:04:53.130211  325002 start_flags.go:1019] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1225 19:04:53.131854  325002 out.go:179] * Using Docker driver with root privileges
	I1225 19:04:53.133643  325002 cni.go:84] Creating CNI manager for "calico"
	I1225 19:04:53.133671  325002 start_flags.go:342] Found "Calico" CNI - setting NetworkPlugin=cni
	I1225 19:04:53.133751  325002 start.go:353] cluster config:
	{Name:calico-910464 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:calico-910464 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime
:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:
0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1225 19:04:53.135296  325002 out.go:179] * Starting "calico-910464" primary control-plane node in "calico-910464" cluster
	I1225 19:04:53.136546  325002 cache.go:134] Beginning downloading kic base image for docker with crio
	I1225 19:04:53.137951  325002 out.go:179] * Pulling base image v0.0.48-1766570851-22316 ...
	I1225 19:04:53.139149  325002 preload.go:188] Checking if preload exists for k8s version v1.34.3 and runtime crio
	I1225 19:04:53.139197  325002 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22301-5579/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.3-cri-o-overlay-amd64.tar.lz4
	I1225 19:04:53.139212  325002 cache.go:65] Caching tarball of preloaded images
	I1225 19:04:53.139232  325002 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a in local docker daemon
	I1225 19:04:53.139332  325002 preload.go:251] Found /home/jenkins/minikube-integration/22301-5579/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1225 19:04:53.139348  325002 cache.go:68] Finished verifying existence of preloaded tar for v1.34.3 on crio
	I1225 19:04:53.139476  325002 profile.go:143] Saving config to /home/jenkins/minikube-integration/22301-5579/.minikube/profiles/calico-910464/config.json ...
	I1225 19:04:53.139510  325002 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22301-5579/.minikube/profiles/calico-910464/config.json: {Name:mk694e835f93aef7a3573ddd262d5970b3f92ec5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1225 19:04:53.164848  325002 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a in local docker daemon, skipping pull
	I1225 19:04:53.164873  325002 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a exists in daemon, skipping load
	I1225 19:04:53.164902  325002 cache.go:243] Successfully downloaded all kic artifacts
	I1225 19:04:53.164938  325002 start.go:360] acquireMachinesLock for calico-910464: {Name:mkc09de11839eab5406205339afa568256a29ca9 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1225 19:04:53.165050  325002 start.go:364] duration metric: took 91.049µs to acquireMachinesLock for "calico-910464"
	I1225 19:04:53.165079  325002 start.go:93] Provisioning new machine with config: &{Name:calico-910464 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:calico-910464 Namespace:default APIServerHAVIP: APIServerName:min
ikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwar
ePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false} &{Name: IP: Port:8443 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1225 19:04:53.165183  325002 start.go:125] createHost starting for "" (driver="docker")
	I1225 19:04:53.224502  316482 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1225 19:04:53.724678  316482 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1225 19:04:54.224013  316482 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1225 19:04:54.304636  316482 kubeadm.go:1114] duration metric: took 5.198160942s to wait for elevateKubeSystemPrivileges
	I1225 19:04:54.304669  316482 kubeadm.go:403] duration metric: took 16.541963807s to StartCluster
	I1225 19:04:54.304689  316482 settings.go:142] acquiring lock: {Name:mk8db67a95daebdad9164c803819dcb179c3006a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1225 19:04:54.304763  316482 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22301-5579/kubeconfig
	I1225 19:04:54.305955  316482 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22301-5579/kubeconfig: {Name:mk959de02482281f87c2171d9b2421941fad1e06 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1225 19:04:54.306228  316482 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1225 19:04:54.306247  316482 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1225 19:04:54.306224  316482 start.go:236] Will wait 15m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1225 19:04:54.306333  316482 addons.go:70] Setting storage-provisioner=true in profile "kindnet-910464"
	I1225 19:04:54.306352  316482 addons.go:239] Setting addon storage-provisioner=true in "kindnet-910464"
	I1225 19:04:54.306379  316482 host.go:66] Checking if "kindnet-910464" exists ...
	I1225 19:04:54.306383  316482 addons.go:70] Setting default-storageclass=true in profile "kindnet-910464"
	I1225 19:04:54.306494  316482 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "kindnet-910464"
	I1225 19:04:54.306414  316482 config.go:182] Loaded profile config "kindnet-910464": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1225 19:04:54.306878  316482 cli_runner.go:164] Run: docker container inspect kindnet-910464 --format={{.State.Status}}
	I1225 19:04:54.307071  316482 cli_runner.go:164] Run: docker container inspect kindnet-910464 --format={{.State.Status}}
	I1225 19:04:54.308749  316482 out.go:179] * Verifying Kubernetes components...
	I1225 19:04:54.309968  316482 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1225 19:04:54.331153  316482 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1225 19:04:54.331760  316482 addons.go:239] Setting addon default-storageclass=true in "kindnet-910464"
	I1225 19:04:54.331805  316482 host.go:66] Checking if "kindnet-910464" exists ...
	I1225 19:04:54.332541  316482 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1225 19:04:54.332563  316482 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1225 19:04:54.332582  316482 cli_runner.go:164] Run: docker container inspect kindnet-910464 --format={{.State.Status}}
	I1225 19:04:54.332617  316482 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-910464
	I1225 19:04:54.366005  316482 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33113 SSHKeyPath:/home/jenkins/minikube-integration/22301-5579/.minikube/machines/kindnet-910464/id_rsa Username:docker}
	I1225 19:04:54.366675  316482 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1225 19:04:54.366696  316482 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1225 19:04:54.367514  316482 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-910464
	I1225 19:04:54.392827  316482 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33113 SSHKeyPath:/home/jenkins/minikube-integration/22301-5579/.minikube/machines/kindnet-910464/id_rsa Username:docker}
	I1225 19:04:54.414271  316482 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.85.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1225 19:04:54.482956  316482 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1225 19:04:54.486678  316482 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1225 19:04:54.521144  316482 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1225 19:04:54.649523  316482 start.go:987] {"host.minikube.internal": 192.168.85.1} host record injected into CoreDNS's ConfigMap
	I1225 19:04:54.651614  316482 node_ready.go:35] waiting up to 15m0s for node "kindnet-910464" to be "Ready" ...
	I1225 19:04:54.884597  316482 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	W1225 19:04:51.492224  310133 pod_ready.go:104] pod "coredns-66bc5c9577-c9wmz" is not "Ready", error: <nil>
	W1225 19:04:53.991719  310133 pod_ready.go:104] pod "coredns-66bc5c9577-c9wmz" is not "Ready", error: <nil>
	I1225 19:04:55.492201  310133 pod_ready.go:94] pod "coredns-66bc5c9577-c9wmz" is "Ready"
	I1225 19:04:55.492235  310133 pod_ready.go:86] duration metric: took 33.505935854s for pod "coredns-66bc5c9577-c9wmz" in "kube-system" namespace to be "Ready" or be gone ...
	I1225 19:04:55.494805  310133 pod_ready.go:83] waiting for pod "etcd-default-k8s-diff-port-960022" in "kube-system" namespace to be "Ready" or be gone ...
	I1225 19:04:55.498814  310133 pod_ready.go:94] pod "etcd-default-k8s-diff-port-960022" is "Ready"
	I1225 19:04:55.498840  310133 pod_ready.go:86] duration metric: took 4.013465ms for pod "etcd-default-k8s-diff-port-960022" in "kube-system" namespace to be "Ready" or be gone ...
	I1225 19:04:55.500633  310133 pod_ready.go:83] waiting for pod "kube-apiserver-default-k8s-diff-port-960022" in "kube-system" namespace to be "Ready" or be gone ...
	I1225 19:04:55.504591  310133 pod_ready.go:94] pod "kube-apiserver-default-k8s-diff-port-960022" is "Ready"
	I1225 19:04:55.504618  310133 pod_ready.go:86] duration metric: took 3.959115ms for pod "kube-apiserver-default-k8s-diff-port-960022" in "kube-system" namespace to be "Ready" or be gone ...
	I1225 19:04:55.506601  310133 pod_ready.go:83] waiting for pod "kube-controller-manager-default-k8s-diff-port-960022" in "kube-system" namespace to be "Ready" or be gone ...
	I1225 19:04:55.690863  310133 pod_ready.go:94] pod "kube-controller-manager-default-k8s-diff-port-960022" is "Ready"
	I1225 19:04:55.690919  310133 pod_ready.go:86] duration metric: took 184.292647ms for pod "kube-controller-manager-default-k8s-diff-port-960022" in "kube-system" namespace to be "Ready" or be gone ...
	I1225 19:04:55.890498  310133 pod_ready.go:83] waiting for pod "kube-proxy-wl784" in "kube-system" namespace to be "Ready" or be gone ...
	I1225 19:04:56.290458  310133 pod_ready.go:94] pod "kube-proxy-wl784" is "Ready"
	I1225 19:04:56.290485  310133 pod_ready.go:86] duration metric: took 399.960959ms for pod "kube-proxy-wl784" in "kube-system" namespace to be "Ready" or be gone ...
	I1225 19:04:56.490461  310133 pod_ready.go:83] waiting for pod "kube-scheduler-default-k8s-diff-port-960022" in "kube-system" namespace to be "Ready" or be gone ...
	I1225 19:04:56.890596  310133 pod_ready.go:94] pod "kube-scheduler-default-k8s-diff-port-960022" is "Ready"
	I1225 19:04:56.890633  310133 pod_ready.go:86] duration metric: took 400.146786ms for pod "kube-scheduler-default-k8s-diff-port-960022" in "kube-system" namespace to be "Ready" or be gone ...
	I1225 19:04:56.890645  310133 pod_ready.go:40] duration metric: took 34.908194557s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1225 19:04:56.933756  310133 start.go:625] kubectl: 1.35.0, cluster: 1.34.3 (minor skew: 1)
	I1225 19:04:56.943127  310133 out.go:179] * Done! kubectl is now configured to use "default-k8s-diff-port-960022" cluster and "default" namespace by default
	I1225 19:04:52.591910  260034 cri.go:96] found id: ""
	I1225 19:04:52.591937  260034 logs.go:282] 0 containers: []
	W1225 19:04:52.591945  260034 logs.go:284] No container was found matching "kube-proxy"
	I1225 19:04:52.591951  260034 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1225 19:04:52.592015  260034 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1225 19:04:52.619687  260034 cri.go:96] found id: "d200870ef180511332d71d52b930a74dc0fbdb1935fecb1c39424f46f42d10fb"
	I1225 19:04:52.619713  260034 cri.go:96] found id: "33343dda71f9e4a85025812aa89a625b9ed4aacd8cf40e537a040b7a352c6338"
	I1225 19:04:52.619719  260034 cri.go:96] found id: ""
	I1225 19:04:52.619728  260034 logs.go:282] 2 containers: [d200870ef180511332d71d52b930a74dc0fbdb1935fecb1c39424f46f42d10fb 33343dda71f9e4a85025812aa89a625b9ed4aacd8cf40e537a040b7a352c6338]
	I1225 19:04:52.619788  260034 ssh_runner.go:195] Run: which crictl
	I1225 19:04:52.623822  260034 ssh_runner.go:195] Run: which crictl
	I1225 19:04:52.627474  260034 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1225 19:04:52.627535  260034 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1225 19:04:52.668060  260034 cri.go:96] found id: ""
	I1225 19:04:52.668097  260034 logs.go:282] 0 containers: []
	W1225 19:04:52.668109  260034 logs.go:284] No container was found matching "kindnet"
	I1225 19:04:52.668116  260034 cri.go:61] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1225 19:04:52.668183  260034 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=storage-provisioner
	I1225 19:04:52.701512  260034 cri.go:96] found id: ""
	I1225 19:04:52.701539  260034 logs.go:282] 0 containers: []
	W1225 19:04:52.701549  260034 logs.go:284] No container was found matching "storage-provisioner"
	I1225 19:04:52.701561  260034 logs.go:123] Gathering logs for kube-controller-manager [33343dda71f9e4a85025812aa89a625b9ed4aacd8cf40e537a040b7a352c6338] ...
	I1225 19:04:52.701583  260034 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 33343dda71f9e4a85025812aa89a625b9ed4aacd8cf40e537a040b7a352c6338"
	I1225 19:04:52.732654  260034 logs.go:123] Gathering logs for CRI-O ...
	I1225 19:04:52.732681  260034 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1225 19:04:52.798236  260034 logs.go:123] Gathering logs for container status ...
	I1225 19:04:52.798278  260034 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1225 19:04:52.834440  260034 logs.go:123] Gathering logs for kubelet ...
	I1225 19:04:52.834472  260034 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1225 19:04:52.929537  260034 logs.go:123] Gathering logs for dmesg ...
	I1225 19:04:52.929566  260034 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1225 19:04:52.946267  260034 logs.go:123] Gathering logs for describe nodes ...
	I1225 19:04:52.946298  260034 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1225 19:04:53.013992  260034 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1225 19:04:53.014014  260034 logs.go:123] Gathering logs for kube-apiserver [e3d5802d1e3d9285d4b0925e9f5391b90305352a65abbab3d6461e977e07719c] ...
	I1225 19:04:53.014029  260034 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e3d5802d1e3d9285d4b0925e9f5391b90305352a65abbab3d6461e977e07719c"
	I1225 19:04:53.058130  260034 logs.go:123] Gathering logs for etcd [b9e20d208a63ece66f4b7d503d5220ba92810f69d6963dee30a8cf70eeec5508] ...
	I1225 19:04:53.058171  260034 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b9e20d208a63ece66f4b7d503d5220ba92810f69d6963dee30a8cf70eeec5508"
	I1225 19:04:53.096147  260034 logs.go:123] Gathering logs for kube-scheduler [5908cdc560e3221d929c96c779b2e73ece21f96acb63b3adbf448cc7de791f2b] ...
	I1225 19:04:53.096182  260034 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5908cdc560e3221d929c96c779b2e73ece21f96acb63b3adbf448cc7de791f2b"
	I1225 19:04:53.127705  260034 logs.go:123] Gathering logs for kube-apiserver [c7e7a74e4da2eb0a24231c0da6fca5c4d2624c5eff5fd59ea857dba6d0787036] ...
	I1225 19:04:53.127738  260034 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 c7e7a74e4da2eb0a24231c0da6fca5c4d2624c5eff5fd59ea857dba6d0787036"
	I1225 19:04:53.165476  260034 logs.go:123] Gathering logs for kube-scheduler [47a1daa4bde269c4add70fd957319c2bd011e78566a1187d9a76b78fb4005782] ...
	I1225 19:04:53.165502  260034 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 47a1daa4bde269c4add70fd957319c2bd011e78566a1187d9a76b78fb4005782"
	I1225 19:04:53.194090  260034 logs.go:123] Gathering logs for kube-controller-manager [d200870ef180511332d71d52b930a74dc0fbdb1935fecb1c39424f46f42d10fb] ...
	I1225 19:04:53.194115  260034 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d200870ef180511332d71d52b930a74dc0fbdb1935fecb1c39424f46f42d10fb"
	I1225 19:04:55.725039  260034 api_server.go:299] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1225 19:04:55.725481  260034 api_server.go:315] stopped: https://192.168.94.2:8443/healthz: Get "https://192.168.94.2:8443/healthz": dial tcp 192.168.94.2:8443: connect: connection refused
	I1225 19:04:55.725536  260034 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1225 19:04:55.725592  260034 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1225 19:04:55.757008  260034 cri.go:96] found id: "c7e7a74e4da2eb0a24231c0da6fca5c4d2624c5eff5fd59ea857dba6d0787036"
	I1225 19:04:55.757032  260034 cri.go:96] found id: "e3d5802d1e3d9285d4b0925e9f5391b90305352a65abbab3d6461e977e07719c"
	I1225 19:04:55.757037  260034 cri.go:96] found id: ""
	I1225 19:04:55.757045  260034 logs.go:282] 2 containers: [c7e7a74e4da2eb0a24231c0da6fca5c4d2624c5eff5fd59ea857dba6d0787036 e3d5802d1e3d9285d4b0925e9f5391b90305352a65abbab3d6461e977e07719c]
	I1225 19:04:55.757090  260034 ssh_runner.go:195] Run: which crictl
	I1225 19:04:55.761908  260034 ssh_runner.go:195] Run: which crictl
	I1225 19:04:55.765725  260034 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1225 19:04:55.765776  260034 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1225 19:04:55.794865  260034 cri.go:96] found id: "b9e20d208a63ece66f4b7d503d5220ba92810f69d6963dee30a8cf70eeec5508"
	I1225 19:04:55.794889  260034 cri.go:96] found id: ""
	I1225 19:04:55.794919  260034 logs.go:282] 1 containers: [b9e20d208a63ece66f4b7d503d5220ba92810f69d6963dee30a8cf70eeec5508]
	I1225 19:04:55.794988  260034 ssh_runner.go:195] Run: which crictl
	I1225 19:04:55.799014  260034 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1225 19:04:55.799077  260034 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1225 19:04:55.827783  260034 cri.go:96] found id: ""
	I1225 19:04:55.827807  260034 logs.go:282] 0 containers: []
	W1225 19:04:55.827815  260034 logs.go:284] No container was found matching "coredns"
	I1225 19:04:55.827823  260034 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1225 19:04:55.827873  260034 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1225 19:04:55.858521  260034 cri.go:96] found id: "5908cdc560e3221d929c96c779b2e73ece21f96acb63b3adbf448cc7de791f2b"
	I1225 19:04:55.858550  260034 cri.go:96] found id: "47a1daa4bde269c4add70fd957319c2bd011e78566a1187d9a76b78fb4005782"
	I1225 19:04:55.858558  260034 cri.go:96] found id: ""
	I1225 19:04:55.858569  260034 logs.go:282] 2 containers: [5908cdc560e3221d929c96c779b2e73ece21f96acb63b3adbf448cc7de791f2b 47a1daa4bde269c4add70fd957319c2bd011e78566a1187d9a76b78fb4005782]
	I1225 19:04:55.858628  260034 ssh_runner.go:195] Run: which crictl
	I1225 19:04:55.862646  260034 ssh_runner.go:195] Run: which crictl
	I1225 19:04:55.866385  260034 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1225 19:04:55.866446  260034 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1225 19:04:55.894107  260034 cri.go:96] found id: ""
	I1225 19:04:55.894128  260034 logs.go:282] 0 containers: []
	W1225 19:04:55.894136  260034 logs.go:284] No container was found matching "kube-proxy"
	I1225 19:04:55.894142  260034 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1225 19:04:55.894188  260034 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1225 19:04:55.924140  260034 cri.go:96] found id: "d200870ef180511332d71d52b930a74dc0fbdb1935fecb1c39424f46f42d10fb"
	I1225 19:04:55.924163  260034 cri.go:96] found id: "33343dda71f9e4a85025812aa89a625b9ed4aacd8cf40e537a040b7a352c6338"
	I1225 19:04:55.924167  260034 cri.go:96] found id: ""
	I1225 19:04:55.924174  260034 logs.go:282] 2 containers: [d200870ef180511332d71d52b930a74dc0fbdb1935fecb1c39424f46f42d10fb 33343dda71f9e4a85025812aa89a625b9ed4aacd8cf40e537a040b7a352c6338]
	I1225 19:04:55.924221  260034 ssh_runner.go:195] Run: which crictl
	I1225 19:04:55.928308  260034 ssh_runner.go:195] Run: which crictl
	I1225 19:04:55.931971  260034 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1225 19:04:55.932033  260034 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1225 19:04:55.961310  260034 cri.go:96] found id: ""
	I1225 19:04:55.961337  260034 logs.go:282] 0 containers: []
	W1225 19:04:55.961350  260034 logs.go:284] No container was found matching "kindnet"
	I1225 19:04:55.961357  260034 cri.go:61] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1225 19:04:55.961420  260034 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=storage-provisioner
	I1225 19:04:55.990145  260034 cri.go:96] found id: ""
	I1225 19:04:55.990173  260034 logs.go:282] 0 containers: []
	W1225 19:04:55.990186  260034 logs.go:284] No container was found matching "storage-provisioner"
	I1225 19:04:55.990197  260034 logs.go:123] Gathering logs for kubelet ...
	I1225 19:04:55.990210  260034 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1225 19:04:56.084901  260034 logs.go:123] Gathering logs for kube-apiserver [e3d5802d1e3d9285d4b0925e9f5391b90305352a65abbab3d6461e977e07719c] ...
	I1225 19:04:56.084940  260034 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e3d5802d1e3d9285d4b0925e9f5391b90305352a65abbab3d6461e977e07719c"
	I1225 19:04:56.130778  260034 logs.go:123] Gathering logs for kube-scheduler [5908cdc560e3221d929c96c779b2e73ece21f96acb63b3adbf448cc7de791f2b] ...
	I1225 19:04:56.130811  260034 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5908cdc560e3221d929c96c779b2e73ece21f96acb63b3adbf448cc7de791f2b"
	I1225 19:04:56.161334  260034 logs.go:123] Gathering logs for kube-controller-manager [d200870ef180511332d71d52b930a74dc0fbdb1935fecb1c39424f46f42d10fb] ...
	I1225 19:04:56.161364  260034 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d200870ef180511332d71d52b930a74dc0fbdb1935fecb1c39424f46f42d10fb"
	I1225 19:04:56.190314  260034 logs.go:123] Gathering logs for kube-controller-manager [33343dda71f9e4a85025812aa89a625b9ed4aacd8cf40e537a040b7a352c6338] ...
	I1225 19:04:56.190344  260034 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 33343dda71f9e4a85025812aa89a625b9ed4aacd8cf40e537a040b7a352c6338"
	I1225 19:04:56.221309  260034 logs.go:123] Gathering logs for CRI-O ...
	I1225 19:04:56.221333  260034 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1225 19:04:56.278334  260034 logs.go:123] Gathering logs for dmesg ...
	I1225 19:04:56.278365  260034 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1225 19:04:56.293021  260034 logs.go:123] Gathering logs for describe nodes ...
	I1225 19:04:56.293044  260034 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1225 19:04:56.350363  260034 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1225 19:04:56.350388  260034 logs.go:123] Gathering logs for kube-apiserver [c7e7a74e4da2eb0a24231c0da6fca5c4d2624c5eff5fd59ea857dba6d0787036] ...
	I1225 19:04:56.350407  260034 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 c7e7a74e4da2eb0a24231c0da6fca5c4d2624c5eff5fd59ea857dba6d0787036"
	I1225 19:04:56.380370  260034 logs.go:123] Gathering logs for etcd [b9e20d208a63ece66f4b7d503d5220ba92810f69d6963dee30a8cf70eeec5508] ...
	I1225 19:04:56.380398  260034 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b9e20d208a63ece66f4b7d503d5220ba92810f69d6963dee30a8cf70eeec5508"
	I1225 19:04:56.412970  260034 logs.go:123] Gathering logs for kube-scheduler [47a1daa4bde269c4add70fd957319c2bd011e78566a1187d9a76b78fb4005782] ...
	I1225 19:04:56.412997  260034 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 47a1daa4bde269c4add70fd957319c2bd011e78566a1187d9a76b78fb4005782"
	I1225 19:04:56.441800  260034 logs.go:123] Gathering logs for container status ...
	I1225 19:04:56.441826  260034 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1225 19:04:53.167769  325002 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1225 19:04:53.168069  325002 start.go:159] libmachine.API.Create for "calico-910464" (driver="docker")
	I1225 19:04:53.168105  325002 client.go:173] LocalClient.Create starting
	I1225 19:04:53.168189  325002 main.go:144] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22301-5579/.minikube/certs/ca.pem
	I1225 19:04:53.168233  325002 main.go:144] libmachine: Decoding PEM data...
	I1225 19:04:53.168262  325002 main.go:144] libmachine: Parsing certificate...
	I1225 19:04:53.168339  325002 main.go:144] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22301-5579/.minikube/certs/cert.pem
	I1225 19:04:53.168370  325002 main.go:144] libmachine: Decoding PEM data...
	I1225 19:04:53.168390  325002 main.go:144] libmachine: Parsing certificate...
	I1225 19:04:53.168748  325002 cli_runner.go:164] Run: docker network inspect calico-910464 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1225 19:04:53.189582  325002 cli_runner.go:211] docker network inspect calico-910464 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1225 19:04:53.189673  325002 network_create.go:284] running [docker network inspect calico-910464] to gather additional debugging logs...
	I1225 19:04:53.189715  325002 cli_runner.go:164] Run: docker network inspect calico-910464
	W1225 19:04:53.208205  325002 cli_runner.go:211] docker network inspect calico-910464 returned with exit code 1
	I1225 19:04:53.208238  325002 network_create.go:287] error running [docker network inspect calico-910464]: docker network inspect calico-910464: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network calico-910464 not found
	I1225 19:04:53.208250  325002 network_create.go:289] output of [docker network inspect calico-910464]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network calico-910464 not found
	
	** /stderr **
	I1225 19:04:53.208332  325002 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1225 19:04:53.229548  325002 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-ced36c84bfdd IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:4a:63:07:5b:3f:80} reservation:<nil>}
	I1225 19:04:53.230572  325002 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-4f7e79553acc IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:92:4f:4f:8b:03:9b} reservation:<nil>}
	I1225 19:04:53.231704  325002 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-f47bec209e15 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:b2:e9:83:11:22:b7} reservation:<nil>}
	I1225 19:04:53.232803  325002 network.go:206] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001eb83f0}
	I1225 19:04:53.232840  325002 network_create.go:124] attempt to create docker network calico-910464 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I1225 19:04:53.232913  325002 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=calico-910464 calico-910464
	I1225 19:04:53.285417  325002 network_create.go:108] docker network calico-910464 192.168.76.0/24 created
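
The three "skipping subnet ... that is taken" lines followed by "using free private subnet 192.168.76.0/24" show the free-subnet scan that precedes the docker network create above. A minimal sketch of that scan is below; the candidate list and helper names are illustrative assumptions, not minikube's actual implementation.

// freesubnet.go - hedged sketch of the "skip taken subnet, pick next" step logged above.
package main

import "fmt"

// firstFreeSubnet walks a fixed candidate list (an assumption for this sketch)
// and returns the first /24 that is not already backed by a bridge interface.
func firstFreeSubnet(taken map[string]bool) (string, bool) {
	candidates := []string{
		"192.168.49.0/24", "192.168.58.0/24", "192.168.67.0/24", "192.168.76.0/24",
	}
	for _, c := range candidates {
		if !taken[c] {
			return c, true
		}
	}
	return "", false
}

func main() {
	taken := map[string]bool{
		"192.168.49.0/24": true, // br-ced36c84bfdd in the log
		"192.168.58.0/24": true, // br-4f7e79553acc
		"192.168.67.0/24": true, // br-f47bec209e15
	}
	if s, ok := firstFreeSubnet(taken); ok {
		fmt.Println("using free private subnet", s) // prints 192.168.76.0/24, matching the log
	}
}
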
	I1225 19:04:53.285452  325002 kic.go:121] calculated static IP "192.168.76.2" for the "calico-910464" container
	I1225 19:04:53.285538  325002 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1225 19:04:53.305038  325002 cli_runner.go:164] Run: docker volume create calico-910464 --label name.minikube.sigs.k8s.io=calico-910464 --label created_by.minikube.sigs.k8s.io=true
	I1225 19:04:53.323566  325002 oci.go:103] Successfully created a docker volume calico-910464
	I1225 19:04:53.323651  325002 cli_runner.go:164] Run: docker run --rm --name calico-910464-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=calico-910464 --entrypoint /usr/bin/test -v calico-910464:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a -d /var/lib
	I1225 19:04:53.748243  325002 oci.go:107] Successfully prepared a docker volume calico-910464
	I1225 19:04:53.748303  325002 preload.go:188] Checking if preload exists for k8s version v1.34.3 and runtime crio
	I1225 19:04:53.748315  325002 kic.go:194] Starting extracting preloaded images to volume ...
	I1225 19:04:53.748378  325002 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22301-5579/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.3-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v calico-910464:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a -I lz4 -xf /preloaded.tar -C /extractDir
	I1225 19:04:54.889023  316482 addons.go:530] duration metric: took 582.767296ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I1225 19:04:55.154844  316482 kapi.go:214] "coredns" deployment in "kube-system" namespace and "kindnet-910464" context rescaled to 1 replicas
	W1225 19:04:56.654343  316482 node_ready.go:57] node "kindnet-910464" has "Ready":"False" status (will retry)
	I1225 19:04:58.212735  325002 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22301-5579/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.3-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v calico-910464:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a -I lz4 -xf /preloaded.tar -C /extractDir: (4.464313069s)
	I1225 19:04:58.212784  325002 kic.go:203] duration metric: took 4.464465034s to extract preloaded images to volume ...
	W1225 19:04:58.212874  325002 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1225 19:04:58.212928  325002 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1225 19:04:58.212982  325002 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1225 19:04:58.266410  325002 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname calico-910464 --name calico-910464 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=calico-910464 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=calico-910464 --network calico-910464 --ip 192.168.76.2 --volume calico-910464:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a
	I1225 19:04:58.534175  325002 cli_runner.go:164] Run: docker container inspect calico-910464 --format={{.State.Running}}
	I1225 19:04:58.552005  325002 cli_runner.go:164] Run: docker container inspect calico-910464 --format={{.State.Status}}
	I1225 19:04:58.570282  325002 cli_runner.go:164] Run: docker exec calico-910464 stat /var/lib/dpkg/alternatives/iptables
	I1225 19:04:58.618195  325002 oci.go:144] the created container "calico-910464" has a running status.
	I1225 19:04:58.618234  325002 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/22301-5579/.minikube/machines/calico-910464/id_rsa...
	I1225 19:04:58.685066  325002 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/22301-5579/.minikube/machines/calico-910464/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1225 19:04:58.710001  325002 cli_runner.go:164] Run: docker container inspect calico-910464 --format={{.State.Status}}
	I1225 19:04:58.733851  325002 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1225 19:04:58.733877  325002 kic_runner.go:114] Args: [docker exec --privileged calico-910464 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1225 19:04:58.780749  325002 cli_runner.go:164] Run: docker container inspect calico-910464 --format={{.State.Status}}
	I1225 19:04:58.806445  325002 machine.go:94] provisionDockerMachine start ...
	I1225 19:04:58.806538  325002 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-910464
	I1225 19:04:58.828831  325002 main.go:144] libmachine: Using SSH client type: native
	I1225 19:04:58.829302  325002 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e300] 0x850fa0 <nil>  [] 0s} 127.0.0.1 33118 <nil> <nil>}
	I1225 19:04:58.829322  325002 main.go:144] libmachine: About to run SSH command:
	hostname
	I1225 19:04:58.962762  325002 main.go:144] libmachine: SSH cmd err, output: <nil>: calico-910464
	
	I1225 19:04:58.962789  325002 ubuntu.go:182] provisioning hostname "calico-910464"
	I1225 19:04:58.962855  325002 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-910464
	I1225 19:04:58.983861  325002 main.go:144] libmachine: Using SSH client type: native
	I1225 19:04:58.984191  325002 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e300] 0x850fa0 <nil>  [] 0s} 127.0.0.1 33118 <nil> <nil>}
	I1225 19:04:58.984212  325002 main.go:144] libmachine: About to run SSH command:
	sudo hostname calico-910464 && echo "calico-910464" | sudo tee /etc/hostname
	I1225 19:04:59.122249  325002 main.go:144] libmachine: SSH cmd err, output: <nil>: calico-910464
	
	I1225 19:04:59.122347  325002 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-910464
	I1225 19:04:59.144627  325002 main.go:144] libmachine: Using SSH client type: native
	I1225 19:04:59.144866  325002 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e300] 0x850fa0 <nil>  [] 0s} 127.0.0.1 33118 <nil> <nil>}
	I1225 19:04:59.144884  325002 main.go:144] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\scalico-910464' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 calico-910464/g' /etc/hosts;
				else 
					echo '127.0.1.1 calico-910464' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1225 19:04:59.271213  325002 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	I1225 19:04:59.271250  325002 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22301-5579/.minikube CaCertPath:/home/jenkins/minikube-integration/22301-5579/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22301-5579/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22301-5579/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22301-5579/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22301-5579/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22301-5579/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22301-5579/.minikube}
	I1225 19:04:59.271310  325002 ubuntu.go:190] setting up certificates
	I1225 19:04:59.271328  325002 provision.go:84] configureAuth start
	I1225 19:04:59.271397  325002 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" calico-910464
	I1225 19:04:59.289996  325002 provision.go:143] copyHostCerts
	I1225 19:04:59.290050  325002 exec_runner.go:144] found /home/jenkins/minikube-integration/22301-5579/.minikube/ca.pem, removing ...
	I1225 19:04:59.290058  325002 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22301-5579/.minikube/ca.pem
	I1225 19:04:59.290124  325002 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22301-5579/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22301-5579/.minikube/ca.pem (1078 bytes)
	I1225 19:04:59.290231  325002 exec_runner.go:144] found /home/jenkins/minikube-integration/22301-5579/.minikube/cert.pem, removing ...
	I1225 19:04:59.290243  325002 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22301-5579/.minikube/cert.pem
	I1225 19:04:59.290287  325002 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22301-5579/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22301-5579/.minikube/cert.pem (1123 bytes)
	I1225 19:04:59.290397  325002 exec_runner.go:144] found /home/jenkins/minikube-integration/22301-5579/.minikube/key.pem, removing ...
	I1225 19:04:59.290410  325002 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22301-5579/.minikube/key.pem
	I1225 19:04:59.290448  325002 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22301-5579/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22301-5579/.minikube/key.pem (1679 bytes)
	I1225 19:04:59.290543  325002 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22301-5579/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22301-5579/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22301-5579/.minikube/certs/ca-key.pem org=jenkins.calico-910464 san=[127.0.0.1 192.168.76.2 calico-910464 localhost minikube]
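
The provision.go:117 line above generates a server certificate whose SANs are [127.0.0.1 192.168.76.2 calico-910464 localhost minikube]. A hedged, self-contained sketch of issuing such a cert with Go's crypto/x509 follows; it self-signs for brevity, whereas the real step signs with the minikube CA key named in the log.

// servercert.go - illustrative sketch only; SAN values are copied from the log line above.
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.calico-910464"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(365 * 24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		DNSNames:     []string{"calico-910464", "localhost", "minikube"},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.76.2")},
	}
	// Self-signed for the sketch: template doubles as parent.
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}
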
	I1225 19:04:59.635604  325002 provision.go:177] copyRemoteCerts
	I1225 19:04:59.635662  325002 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1225 19:04:59.635697  325002 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-910464
	I1225 19:04:59.654388  325002 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33118 SSHKeyPath:/home/jenkins/minikube-integration/22301-5579/.minikube/machines/calico-910464/id_rsa Username:docker}
	I1225 19:04:59.748267  325002 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22301-5579/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1225 19:04:59.767548  325002 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22301-5579/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1225 19:04:59.784667  325002 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22301-5579/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1225 19:04:59.802481  325002 provision.go:87] duration metric: took 531.133852ms to configureAuth
	I1225 19:04:59.802507  325002 ubuntu.go:206] setting minikube options for container-runtime
	I1225 19:04:59.802670  325002 config.go:182] Loaded profile config "calico-910464": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1225 19:04:59.802778  325002 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-910464
	I1225 19:04:59.824163  325002 main.go:144] libmachine: Using SSH client type: native
	I1225 19:04:59.824410  325002 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e300] 0x850fa0 <nil>  [] 0s} 127.0.0.1 33118 <nil> <nil>}
	I1225 19:04:59.824435  325002 main.go:144] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1225 19:05:00.083498  325002 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1225 19:05:00.083525  325002 machine.go:97] duration metric: took 1.277055226s to provisionDockerMachine
	I1225 19:05:00.083536  325002 client.go:176] duration metric: took 6.915424073s to LocalClient.Create
	I1225 19:05:00.083555  325002 start.go:167] duration metric: took 6.915485451s to libmachine.API.Create "calico-910464"
	I1225 19:05:00.083564  325002 start.go:293] postStartSetup for "calico-910464" (driver="docker")
	I1225 19:05:00.083578  325002 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1225 19:05:00.083635  325002 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1225 19:05:00.083672  325002 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-910464
	I1225 19:05:00.102133  325002 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33118 SSHKeyPath:/home/jenkins/minikube-integration/22301-5579/.minikube/machines/calico-910464/id_rsa Username:docker}
	I1225 19:05:00.195425  325002 ssh_runner.go:195] Run: cat /etc/os-release
	I1225 19:05:00.198888  325002 main.go:144] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1225 19:05:00.198947  325002 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1225 19:05:00.198961  325002 filesync.go:126] Scanning /home/jenkins/minikube-integration/22301-5579/.minikube/addons for local assets ...
	I1225 19:05:00.199015  325002 filesync.go:126] Scanning /home/jenkins/minikube-integration/22301-5579/.minikube/files for local assets ...
	I1225 19:05:00.199086  325002 filesync.go:149] local asset: /home/jenkins/minikube-integration/22301-5579/.minikube/files/etc/ssl/certs/91122.pem -> 91122.pem in /etc/ssl/certs
	I1225 19:05:00.199171  325002 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1225 19:05:00.207000  325002 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22301-5579/.minikube/files/etc/ssl/certs/91122.pem --> /etc/ssl/certs/91122.pem (1708 bytes)
	I1225 19:05:00.227005  325002 start.go:296] duration metric: took 143.424254ms for postStartSetup
	I1225 19:05:00.227302  325002 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" calico-910464
	I1225 19:05:00.245273  325002 profile.go:143] Saving config to /home/jenkins/minikube-integration/22301-5579/.minikube/profiles/calico-910464/config.json ...
	I1225 19:05:00.245527  325002 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1225 19:05:00.245565  325002 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-910464
	I1225 19:05:00.264074  325002 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33118 SSHKeyPath:/home/jenkins/minikube-integration/22301-5579/.minikube/machines/calico-910464/id_rsa Username:docker}
	I1225 19:05:00.352294  325002 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1225 19:05:00.357175  325002 start.go:128] duration metric: took 7.191978561s to createHost
	I1225 19:05:00.357200  325002 start.go:83] releasing machines lock for "calico-910464", held for 7.192138157s
	I1225 19:05:00.357260  325002 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" calico-910464
	I1225 19:05:00.375679  325002 ssh_runner.go:195] Run: cat /version.json
	I1225 19:05:00.375733  325002 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1225 19:05:00.375789  325002 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-910464
	I1225 19:05:00.375736  325002 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-910464
	I1225 19:05:00.396793  325002 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33118 SSHKeyPath:/home/jenkins/minikube-integration/22301-5579/.minikube/machines/calico-910464/id_rsa Username:docker}
	I1225 19:05:00.396995  325002 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33118 SSHKeyPath:/home/jenkins/minikube-integration/22301-5579/.minikube/machines/calico-910464/id_rsa Username:docker}
	I1225 19:05:00.540046  325002 ssh_runner.go:195] Run: systemctl --version
	I1225 19:05:00.546611  325002 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1225 19:05:00.581834  325002 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1225 19:05:00.586641  325002 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1225 19:05:00.586711  325002 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1225 19:05:00.613728  325002 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1225 19:05:00.613751  325002 start.go:496] detecting cgroup driver to use...
	I1225 19:05:00.613787  325002 detect.go:190] detected "systemd" cgroup driver on host os
	I1225 19:05:00.613830  325002 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1225 19:05:00.629644  325002 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1225 19:05:00.642001  325002 docker.go:218] disabling cri-docker service (if available) ...
	I1225 19:05:00.642056  325002 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1225 19:05:00.659490  325002 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1225 19:05:00.676634  325002 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1225 19:05:00.761907  325002 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1225 19:05:00.850355  325002 docker.go:234] disabling docker service ...
	I1225 19:05:00.850416  325002 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1225 19:05:00.868752  325002 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1225 19:05:00.881414  325002 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1225 19:05:00.964238  325002 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1225 19:05:01.049877  325002 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1225 19:05:01.062744  325002 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1225 19:05:01.076735  325002 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1225 19:05:01.076786  325002 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1225 19:05:01.086586  325002 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1225 19:05:01.086639  325002 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1225 19:05:01.095654  325002 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1225 19:05:01.104978  325002 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1225 19:05:01.114635  325002 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1225 19:05:01.123768  325002 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1225 19:05:01.133113  325002 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1225 19:05:01.147989  325002 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
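
The sed commands above rewrite /etc/crio/crio.conf.d/02-crio.conf in place (pause image, cgroup manager, conmon cgroup, default sysctls). The sketch below applies the first two substitutions to an in-memory string with Go's regexp package, purely to illustrate the transformation; the sample input config is an assumption, not the file's real contents.

// crioconf.go - hedged sketch of the pause_image / cgroup_manager rewrites logged above.
package main

import (
	"fmt"
	"regexp"
)

func main() {
	conf := `[crio.image]
pause_image = "registry.k8s.io/pause:3.10"
[crio.runtime]
cgroup_manager = "cgroupfs"
`
	// Mirror: sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|'
	conf = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.10.1"`)
	// Mirror: sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|'
	conf = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAllString(conf, `cgroup_manager = "systemd"`)
	fmt.Print(conf)
}
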
	I1225 19:05:01.158491  325002 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1225 19:05:01.166169  325002 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1225 19:05:01.173623  325002 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1225 19:05:01.253979  325002 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1225 19:05:01.380441  325002 start.go:553] Will wait 60s for socket path /var/run/crio/crio.sock
	I1225 19:05:01.380511  325002 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
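
"Will wait 60s for socket path /var/run/crio/crio.sock" above is a poll-until-present wait on the CRI socket after restarting crio. A minimal sketch of such a wait loop, with an assumed 500ms poll interval, is:

// waitsock.go - illustrative wait loop; the interval and error text are assumptions.
package main

import (
	"fmt"
	"os"
	"time"
)

func waitForSocket(path string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if _, err := os.Stat(path); err == nil {
			return nil // socket file exists
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("timed out waiting for %s", path)
}

func main() {
	if err := waitForSocket("/var/run/crio/crio.sock", 60*time.Second); err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("crio socket is ready")
}
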
	I1225 19:05:01.384524  325002 start.go:574] Will wait 60s for crictl version
	I1225 19:05:01.384586  325002 ssh_runner.go:195] Run: which crictl
	I1225 19:05:01.388229  325002 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1225 19:05:01.413148  325002 start.go:590] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1225 19:05:01.413225  325002 ssh_runner.go:195] Run: crio --version
	I1225 19:05:01.440807  325002 ssh_runner.go:195] Run: crio --version
	I1225 19:05:01.469406  325002 out.go:179] * Preparing Kubernetes v1.34.3 on CRI-O 1.34.3 ...
	I1225 19:04:58.972954  260034 api_server.go:299] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1225 19:04:58.973362  260034 api_server.go:315] stopped: https://192.168.94.2:8443/healthz: Get "https://192.168.94.2:8443/healthz": dial tcp 192.168.94.2:8443: connect: connection refused
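
The healthz probe above fails with "connection refused" while the apiserver is down, so minikube falls back to gathering component logs. A minimal sketch of that probe against the logged endpoint, with illustrative client settings (timeout, skipped TLS verification), is:

// healthz.go - hedged sketch of the apiserver /healthz check being retried in this log.
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout: 2 * time.Second,
		Transport: &http.Transport{
			// CA verification elided for the sketch; the real check trusts the cluster CA.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	resp, err := client.Get("https://192.168.94.2:8443/healthz")
	if err != nil {
		fmt.Println("stopped:", err) // e.g. "connect: connection refused", as in the log
		return
	}
	defer resp.Body.Close()
	fmt.Println("healthz status:", resp.Status)
}
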
	I1225 19:04:58.973415  260034 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1225 19:04:58.973466  260034 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1225 19:04:59.003827  260034 cri.go:96] found id: "c7e7a74e4da2eb0a24231c0da6fca5c4d2624c5eff5fd59ea857dba6d0787036"
	I1225 19:04:59.003850  260034 cri.go:96] found id: "e3d5802d1e3d9285d4b0925e9f5391b90305352a65abbab3d6461e977e07719c"
	I1225 19:04:59.003856  260034 cri.go:96] found id: ""
	I1225 19:04:59.003865  260034 logs.go:282] 2 containers: [c7e7a74e4da2eb0a24231c0da6fca5c4d2624c5eff5fd59ea857dba6d0787036 e3d5802d1e3d9285d4b0925e9f5391b90305352a65abbab3d6461e977e07719c]
	I1225 19:04:59.003929  260034 ssh_runner.go:195] Run: which crictl
	I1225 19:04:59.008142  260034 ssh_runner.go:195] Run: which crictl
	I1225 19:04:59.011923  260034 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1225 19:04:59.011990  260034 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1225 19:04:59.040983  260034 cri.go:96] found id: "b9e20d208a63ece66f4b7d503d5220ba92810f69d6963dee30a8cf70eeec5508"
	I1225 19:04:59.041005  260034 cri.go:96] found id: ""
	I1225 19:04:59.041013  260034 logs.go:282] 1 containers: [b9e20d208a63ece66f4b7d503d5220ba92810f69d6963dee30a8cf70eeec5508]
	I1225 19:04:59.041068  260034 ssh_runner.go:195] Run: which crictl
	I1225 19:04:59.045125  260034 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1225 19:04:59.045205  260034 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1225 19:04:59.072631  260034 cri.go:96] found id: ""
	I1225 19:04:59.072659  260034 logs.go:282] 0 containers: []
	W1225 19:04:59.072670  260034 logs.go:284] No container was found matching "coredns"
	I1225 19:04:59.072678  260034 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1225 19:04:59.072730  260034 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1225 19:04:59.099249  260034 cri.go:96] found id: "5908cdc560e3221d929c96c779b2e73ece21f96acb63b3adbf448cc7de791f2b"
	I1225 19:04:59.099271  260034 cri.go:96] found id: "47a1daa4bde269c4add70fd957319c2bd011e78566a1187d9a76b78fb4005782"
	I1225 19:04:59.099278  260034 cri.go:96] found id: ""
	I1225 19:04:59.099287  260034 logs.go:282] 2 containers: [5908cdc560e3221d929c96c779b2e73ece21f96acb63b3adbf448cc7de791f2b 47a1daa4bde269c4add70fd957319c2bd011e78566a1187d9a76b78fb4005782]
	I1225 19:04:59.099347  260034 ssh_runner.go:195] Run: which crictl
	I1225 19:04:59.103348  260034 ssh_runner.go:195] Run: which crictl
	I1225 19:04:59.107017  260034 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1225 19:04:59.107078  260034 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1225 19:04:59.139040  260034 cri.go:96] found id: ""
	I1225 19:04:59.139069  260034 logs.go:282] 0 containers: []
	W1225 19:04:59.139081  260034 logs.go:284] No container was found matching "kube-proxy"
	I1225 19:04:59.139088  260034 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1225 19:04:59.139145  260034 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1225 19:04:59.167758  260034 cri.go:96] found id: "d200870ef180511332d71d52b930a74dc0fbdb1935fecb1c39424f46f42d10fb"
	I1225 19:04:59.167780  260034 cri.go:96] found id: "33343dda71f9e4a85025812aa89a625b9ed4aacd8cf40e537a040b7a352c6338"
	I1225 19:04:59.167787  260034 cri.go:96] found id: ""
	I1225 19:04:59.167796  260034 logs.go:282] 2 containers: [d200870ef180511332d71d52b930a74dc0fbdb1935fecb1c39424f46f42d10fb 33343dda71f9e4a85025812aa89a625b9ed4aacd8cf40e537a040b7a352c6338]
	I1225 19:04:59.167871  260034 ssh_runner.go:195] Run: which crictl
	I1225 19:04:59.171879  260034 ssh_runner.go:195] Run: which crictl
	I1225 19:04:59.175852  260034 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1225 19:04:59.175935  260034 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1225 19:04:59.206028  260034 cri.go:96] found id: ""
	I1225 19:04:59.206051  260034 logs.go:282] 0 containers: []
	W1225 19:04:59.206060  260034 logs.go:284] No container was found matching "kindnet"
	I1225 19:04:59.206065  260034 cri.go:61] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1225 19:04:59.206112  260034 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=storage-provisioner
	I1225 19:04:59.234022  260034 cri.go:96] found id: ""
	I1225 19:04:59.234047  260034 logs.go:282] 0 containers: []
	W1225 19:04:59.234055  260034 logs.go:284] No container was found matching "storage-provisioner"
	I1225 19:04:59.234064  260034 logs.go:123] Gathering logs for kube-controller-manager [33343dda71f9e4a85025812aa89a625b9ed4aacd8cf40e537a040b7a352c6338] ...
	I1225 19:04:59.234077  260034 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 33343dda71f9e4a85025812aa89a625b9ed4aacd8cf40e537a040b7a352c6338"
	I1225 19:04:59.259332  260034 logs.go:123] Gathering logs for CRI-O ...
	I1225 19:04:59.259357  260034 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1225 19:04:59.316762  260034 logs.go:123] Gathering logs for kubelet ...
	I1225 19:04:59.316794  260034 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1225 19:04:59.403475  260034 logs.go:123] Gathering logs for describe nodes ...
	I1225 19:04:59.403505  260034 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1225 19:04:59.460425  260034 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1225 19:04:59.460456  260034 logs.go:123] Gathering logs for kube-apiserver [c7e7a74e4da2eb0a24231c0da6fca5c4d2624c5eff5fd59ea857dba6d0787036] ...
	I1225 19:04:59.460474  260034 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 c7e7a74e4da2eb0a24231c0da6fca5c4d2624c5eff5fd59ea857dba6d0787036"
	I1225 19:04:59.491507  260034 logs.go:123] Gathering logs for kube-apiserver [e3d5802d1e3d9285d4b0925e9f5391b90305352a65abbab3d6461e977e07719c] ...
	I1225 19:04:59.491536  260034 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e3d5802d1e3d9285d4b0925e9f5391b90305352a65abbab3d6461e977e07719c"
	I1225 19:04:59.526042  260034 logs.go:123] Gathering logs for etcd [b9e20d208a63ece66f4b7d503d5220ba92810f69d6963dee30a8cf70eeec5508] ...
	I1225 19:04:59.526071  260034 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b9e20d208a63ece66f4b7d503d5220ba92810f69d6963dee30a8cf70eeec5508"
	I1225 19:04:59.561533  260034 logs.go:123] Gathering logs for kube-controller-manager [d200870ef180511332d71d52b930a74dc0fbdb1935fecb1c39424f46f42d10fb] ...
	I1225 19:04:59.561565  260034 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d200870ef180511332d71d52b930a74dc0fbdb1935fecb1c39424f46f42d10fb"
	I1225 19:04:59.587730  260034 logs.go:123] Gathering logs for container status ...
	I1225 19:04:59.587758  260034 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1225 19:04:59.619554  260034 logs.go:123] Gathering logs for dmesg ...
	I1225 19:04:59.619578  260034 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1225 19:04:59.632881  260034 logs.go:123] Gathering logs for kube-scheduler [5908cdc560e3221d929c96c779b2e73ece21f96acb63b3adbf448cc7de791f2b] ...
	I1225 19:04:59.632926  260034 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5908cdc560e3221d929c96c779b2e73ece21f96acb63b3adbf448cc7de791f2b"
	I1225 19:04:59.661634  260034 logs.go:123] Gathering logs for kube-scheduler [47a1daa4bde269c4add70fd957319c2bd011e78566a1187d9a76b78fb4005782] ...
	I1225 19:04:59.661655  260034 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 47a1daa4bde269c4add70fd957319c2bd011e78566a1187d9a76b78fb4005782"
	I1225 19:05:02.189961  260034 api_server.go:299] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1225 19:05:02.190386  260034 api_server.go:315] stopped: https://192.168.94.2:8443/healthz: Get "https://192.168.94.2:8443/healthz": dial tcp 192.168.94.2:8443: connect: connection refused
	I1225 19:05:02.190430  260034 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1225 19:05:02.190481  260034 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1225 19:05:02.219121  260034 cri.go:96] found id: "c7e7a74e4da2eb0a24231c0da6fca5c4d2624c5eff5fd59ea857dba6d0787036"
	I1225 19:05:02.219139  260034 cri.go:96] found id: "e3d5802d1e3d9285d4b0925e9f5391b90305352a65abbab3d6461e977e07719c"
	I1225 19:05:02.219143  260034 cri.go:96] found id: ""
	I1225 19:05:02.219151  260034 logs.go:282] 2 containers: [c7e7a74e4da2eb0a24231c0da6fca5c4d2624c5eff5fd59ea857dba6d0787036 e3d5802d1e3d9285d4b0925e9f5391b90305352a65abbab3d6461e977e07719c]
	I1225 19:05:02.219192  260034 ssh_runner.go:195] Run: which crictl
	I1225 19:05:02.223013  260034 ssh_runner.go:195] Run: which crictl
	I1225 19:05:02.226952  260034 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1225 19:05:02.227007  260034 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1225 19:05:02.255257  260034 cri.go:96] found id: "b9e20d208a63ece66f4b7d503d5220ba92810f69d6963dee30a8cf70eeec5508"
	I1225 19:05:02.255281  260034 cri.go:96] found id: ""
	I1225 19:05:02.255291  260034 logs.go:282] 1 containers: [b9e20d208a63ece66f4b7d503d5220ba92810f69d6963dee30a8cf70eeec5508]
	I1225 19:05:02.255354  260034 ssh_runner.go:195] Run: which crictl
	I1225 19:05:02.259448  260034 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1225 19:05:02.259503  260034 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1225 19:05:02.285751  260034 cri.go:96] found id: ""
	I1225 19:05:02.285778  260034 logs.go:282] 0 containers: []
	W1225 19:05:02.285789  260034 logs.go:284] No container was found matching "coredns"
	I1225 19:05:02.285800  260034 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1225 19:05:02.285856  260034 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1225 19:05:02.313754  260034 cri.go:96] found id: "5908cdc560e3221d929c96c779b2e73ece21f96acb63b3adbf448cc7de791f2b"
	I1225 19:05:02.313777  260034 cri.go:96] found id: "47a1daa4bde269c4add70fd957319c2bd011e78566a1187d9a76b78fb4005782"
	I1225 19:05:02.313784  260034 cri.go:96] found id: ""
	I1225 19:05:02.313794  260034 logs.go:282] 2 containers: [5908cdc560e3221d929c96c779b2e73ece21f96acb63b3adbf448cc7de791f2b 47a1daa4bde269c4add70fd957319c2bd011e78566a1187d9a76b78fb4005782]
	I1225 19:05:02.313847  260034 ssh_runner.go:195] Run: which crictl
	I1225 19:05:02.318213  260034 ssh_runner.go:195] Run: which crictl
	I1225 19:05:02.322440  260034 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1225 19:05:02.322493  260034 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1225 19:05:02.349734  260034 cri.go:96] found id: ""
	I1225 19:05:02.349756  260034 logs.go:282] 0 containers: []
	W1225 19:05:02.349765  260034 logs.go:284] No container was found matching "kube-proxy"
	I1225 19:05:02.349771  260034 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1225 19:05:02.349828  260034 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1225 19:05:02.377326  260034 cri.go:96] found id: "d200870ef180511332d71d52b930a74dc0fbdb1935fecb1c39424f46f42d10fb"
	I1225 19:05:02.377347  260034 cri.go:96] found id: "33343dda71f9e4a85025812aa89a625b9ed4aacd8cf40e537a040b7a352c6338"
	I1225 19:05:02.377352  260034 cri.go:96] found id: ""
	I1225 19:05:02.377361  260034 logs.go:282] 2 containers: [d200870ef180511332d71d52b930a74dc0fbdb1935fecb1c39424f46f42d10fb 33343dda71f9e4a85025812aa89a625b9ed4aacd8cf40e537a040b7a352c6338]
	I1225 19:05:02.377416  260034 ssh_runner.go:195] Run: which crictl
	I1225 19:05:02.381402  260034 ssh_runner.go:195] Run: which crictl
	I1225 19:05:02.385131  260034 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1225 19:05:02.385195  260034 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1225 19:05:02.413654  260034 cri.go:96] found id: ""
	I1225 19:05:02.413677  260034 logs.go:282] 0 containers: []
	W1225 19:05:02.413685  260034 logs.go:284] No container was found matching "kindnet"
	I1225 19:05:02.413690  260034 cri.go:61] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1225 19:05:02.413740  260034 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=storage-provisioner
	I1225 19:05:02.441502  260034 cri.go:96] found id: ""
	I1225 19:05:02.441523  260034 logs.go:282] 0 containers: []
	W1225 19:05:02.441532  260034 logs.go:284] No container was found matching "storage-provisioner"
	I1225 19:05:02.441539  260034 logs.go:123] Gathering logs for CRI-O ...
	I1225 19:05:02.441549  260034 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1225 19:05:02.498220  260034 logs.go:123] Gathering logs for container status ...
	I1225 19:05:02.498247  260034 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1225 19:05:02.528748  260034 logs.go:123] Gathering logs for kubelet ...
	I1225 19:05:02.528783  260034 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1225 19:05:01.470755  325002 cli_runner.go:164] Run: docker network inspect calico-910464 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1225 19:05:01.487991  325002 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1225 19:05:01.492207  325002 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1225 19:05:01.502874  325002 kubeadm.go:884] updating cluster {Name:calico-910464 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:calico-910464 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false} ...
	I1225 19:05:01.503024  325002 preload.go:188] Checking if preload exists for k8s version v1.34.3 and runtime crio
	I1225 19:05:01.503069  325002 ssh_runner.go:195] Run: sudo crictl images --output json
	I1225 19:05:01.535512  325002 crio.go:561] all images are preloaded for cri-o runtime.
	I1225 19:05:01.535530  325002 crio.go:433] Images already preloaded, skipping extraction
	I1225 19:05:01.535573  325002 ssh_runner.go:195] Run: sudo crictl images --output json
	I1225 19:05:01.560536  325002 crio.go:561] all images are preloaded for cri-o runtime.
	I1225 19:05:01.560557  325002 cache_images.go:86] Images are preloaded, skipping loading
	I1225 19:05:01.560563  325002 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.34.3 crio true true} ...
	I1225 19:05:01.560644  325002 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=calico-910464 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.3 ClusterName:calico-910464 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico}
	I1225 19:05:01.560703  325002 ssh_runner.go:195] Run: crio config
	I1225 19:05:01.607803  325002 cni.go:84] Creating CNI manager for "calico"
	I1225 19:05:01.607828  325002 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1225 19:05:01.607849  325002 kubeadm.go:197] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.34.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:calico-910464 NodeName:calico-910464 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1225 19:05:01.607982  325002 kubeadm.go:203] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "calico-910464"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1225 19:05:01.608042  325002 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.3
	I1225 19:05:01.616213  325002 binaries.go:51] Found k8s binaries, skipping transfer
	I1225 19:05:01.616280  325002 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1225 19:05:01.623810  325002 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I1225 19:05:01.636762  325002 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1225 19:05:01.651517  325002 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2209 bytes)
	I1225 19:05:01.664867  325002 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1225 19:05:01.668603  325002 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1225 19:05:01.678642  325002 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1225 19:05:01.763619  325002 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1225 19:05:01.786759  325002 certs.go:69] Setting up /home/jenkins/minikube-integration/22301-5579/.minikube/profiles/calico-910464 for IP: 192.168.76.2
	I1225 19:05:01.786781  325002 certs.go:195] generating shared ca certs ...
	I1225 19:05:01.786796  325002 certs.go:227] acquiring lock for ca certs: {Name:mkc96ab6366f062029d385d20297063671b19bca Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1225 19:05:01.786987  325002 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22301-5579/.minikube/ca.key
	I1225 19:05:01.787057  325002 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22301-5579/.minikube/proxy-client-ca.key
	I1225 19:05:01.787076  325002 certs.go:257] generating profile certs ...
	I1225 19:05:01.787160  325002 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22301-5579/.minikube/profiles/calico-910464/client.key
	I1225 19:05:01.787182  325002 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22301-5579/.minikube/profiles/calico-910464/client.crt with IP's: []
	I1225 19:05:01.882126  325002 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22301-5579/.minikube/profiles/calico-910464/client.crt ...
	I1225 19:05:01.882154  325002 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22301-5579/.minikube/profiles/calico-910464/client.crt: {Name:mk4a61814cd88fa168d655fbd09949c88a89e8be Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1225 19:05:01.882359  325002 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22301-5579/.minikube/profiles/calico-910464/client.key ...
	I1225 19:05:01.882377  325002 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22301-5579/.minikube/profiles/calico-910464/client.key: {Name:mk12b74d72a24bef28d951f4c17d80affedb5701 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1225 19:05:01.882486  325002 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22301-5579/.minikube/profiles/calico-910464/apiserver.key.3b7551d0
	I1225 19:05:01.882502  325002 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22301-5579/.minikube/profiles/calico-910464/apiserver.crt.3b7551d0 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.76.2]
	I1225 19:05:02.040289  325002 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22301-5579/.minikube/profiles/calico-910464/apiserver.crt.3b7551d0 ...
	I1225 19:05:02.040330  325002 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22301-5579/.minikube/profiles/calico-910464/apiserver.crt.3b7551d0: {Name:mkc8e32a96a4cc7aa0bb8b50086bc36890de6d87 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1225 19:05:02.040541  325002 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22301-5579/.minikube/profiles/calico-910464/apiserver.key.3b7551d0 ...
	I1225 19:05:02.040566  325002 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22301-5579/.minikube/profiles/calico-910464/apiserver.key.3b7551d0: {Name:mk38b5fc28f106c9b8ee129efce965b268b814a1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1225 19:05:02.040678  325002 certs.go:382] copying /home/jenkins/minikube-integration/22301-5579/.minikube/profiles/calico-910464/apiserver.crt.3b7551d0 -> /home/jenkins/minikube-integration/22301-5579/.minikube/profiles/calico-910464/apiserver.crt
	I1225 19:05:02.040803  325002 certs.go:386] copying /home/jenkins/minikube-integration/22301-5579/.minikube/profiles/calico-910464/apiserver.key.3b7551d0 -> /home/jenkins/minikube-integration/22301-5579/.minikube/profiles/calico-910464/apiserver.key
	I1225 19:05:02.040906  325002 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22301-5579/.minikube/profiles/calico-910464/proxy-client.key
	I1225 19:05:02.040928  325002 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22301-5579/.minikube/profiles/calico-910464/proxy-client.crt with IP's: []
	I1225 19:05:02.090477  325002 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22301-5579/.minikube/profiles/calico-910464/proxy-client.crt ...
	I1225 19:05:02.090505  325002 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22301-5579/.minikube/profiles/calico-910464/proxy-client.crt: {Name:mk8f56ef8b761215e363a7f8cb18b671b8bed273 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1225 19:05:02.090662  325002 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22301-5579/.minikube/profiles/calico-910464/proxy-client.key ...
	I1225 19:05:02.090672  325002 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22301-5579/.minikube/profiles/calico-910464/proxy-client.key: {Name:mk9eeb3e5925dd6bc2d6ddc251f2048fad80b60f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1225 19:05:02.090848  325002 certs.go:484] found cert: /home/jenkins/minikube-integration/22301-5579/.minikube/certs/9112.pem (1338 bytes)
	W1225 19:05:02.090890  325002 certs.go:480] ignoring /home/jenkins/minikube-integration/22301-5579/.minikube/certs/9112_empty.pem, impossibly tiny 0 bytes
	I1225 19:05:02.090919  325002 certs.go:484] found cert: /home/jenkins/minikube-integration/22301-5579/.minikube/certs/ca-key.pem (1679 bytes)
	I1225 19:05:02.090958  325002 certs.go:484] found cert: /home/jenkins/minikube-integration/22301-5579/.minikube/certs/ca.pem (1078 bytes)
	I1225 19:05:02.090983  325002 certs.go:484] found cert: /home/jenkins/minikube-integration/22301-5579/.minikube/certs/cert.pem (1123 bytes)
	I1225 19:05:02.091008  325002 certs.go:484] found cert: /home/jenkins/minikube-integration/22301-5579/.minikube/certs/key.pem (1679 bytes)
	I1225 19:05:02.091051  325002 certs.go:484] found cert: /home/jenkins/minikube-integration/22301-5579/.minikube/files/etc/ssl/certs/91122.pem (1708 bytes)
	I1225 19:05:02.091611  325002 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22301-5579/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1225 19:05:02.110093  325002 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22301-5579/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1225 19:05:02.127266  325002 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22301-5579/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1225 19:05:02.144416  325002 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22301-5579/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1225 19:05:02.162174  325002 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22301-5579/.minikube/profiles/calico-910464/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1225 19:05:02.178839  325002 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22301-5579/.minikube/profiles/calico-910464/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1225 19:05:02.196468  325002 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22301-5579/.minikube/profiles/calico-910464/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1225 19:05:02.216496  325002 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22301-5579/.minikube/profiles/calico-910464/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1225 19:05:02.235498  325002 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22301-5579/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1225 19:05:02.256575  325002 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22301-5579/.minikube/certs/9112.pem --> /usr/share/ca-certificates/9112.pem (1338 bytes)
	I1225 19:05:02.274289  325002 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22301-5579/.minikube/files/etc/ssl/certs/91122.pem --> /usr/share/ca-certificates/91122.pem (1708 bytes)
	I1225 19:05:02.292749  325002 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (722 bytes)
	I1225 19:05:02.305357  325002 ssh_runner.go:195] Run: openssl version
	I1225 19:05:02.312461  325002 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1225 19:05:02.320280  325002 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1225 19:05:02.327649  325002 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1225 19:05:02.331410  325002 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 25 18:28 /usr/share/ca-certificates/minikubeCA.pem
	I1225 19:05:02.331458  325002 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1225 19:05:02.369377  325002 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1225 19:05:02.378965  325002 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
	I1225 19:05:02.387311  325002 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/9112.pem
	I1225 19:05:02.394711  325002 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/9112.pem /etc/ssl/certs/9112.pem
	I1225 19:05:02.402883  325002 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/9112.pem
	I1225 19:05:02.407060  325002 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 25 18:34 /usr/share/ca-certificates/9112.pem
	I1225 19:05:02.407118  325002 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/9112.pem
	I1225 19:05:02.446705  325002 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1225 19:05:02.455005  325002 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/9112.pem /etc/ssl/certs/51391683.0
	I1225 19:05:02.463143  325002 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/91122.pem
	I1225 19:05:02.470457  325002 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/91122.pem /etc/ssl/certs/91122.pem
	I1225 19:05:02.477443  325002 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/91122.pem
	I1225 19:05:02.480843  325002 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 25 18:34 /usr/share/ca-certificates/91122.pem
	I1225 19:05:02.480925  325002 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/91122.pem
	I1225 19:05:02.515635  325002 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1225 19:05:02.523770  325002 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/91122.pem /etc/ssl/certs/3ec20f2e.0
	I1225 19:05:02.532069  325002 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1225 19:05:02.535588  325002 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1225 19:05:02.535641  325002 kubeadm.go:401] StartCluster: {Name:calico-910464 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:calico-910464 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1225 19:05:02.535722  325002 cri.go:61] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1225 19:05:02.535770  325002 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1225 19:05:02.563562  325002 cri.go:96] found id: ""
	I1225 19:05:02.563626  325002 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1225 19:05:02.571750  325002 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1225 19:05:02.579570  325002 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1225 19:05:02.579625  325002 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1225 19:05:02.588271  325002 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1225 19:05:02.588298  325002 kubeadm.go:158] found existing configuration files:
	
	I1225 19:05:02.588345  325002 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1225 19:05:02.596007  325002 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1225 19:05:02.596059  325002 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1225 19:05:02.604131  325002 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1225 19:05:02.612070  325002 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1225 19:05:02.612126  325002 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1225 19:05:02.620108  325002 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1225 19:05:02.627999  325002 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1225 19:05:02.628086  325002 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1225 19:05:02.635387  325002 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1225 19:05:02.643359  325002 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1225 19:05:02.643435  325002 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1225 19:05:02.651154  325002 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1225 19:05:02.693479  325002 kubeadm.go:319] [init] Using Kubernetes version: v1.34.3
	I1225 19:05:02.693570  325002 kubeadm.go:319] [preflight] Running pre-flight checks
	I1225 19:05:02.715933  325002 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1225 19:05:02.716012  325002 kubeadm.go:319] KERNEL_VERSION: 6.8.0-1045-gcp
	I1225 19:05:02.716059  325002 kubeadm.go:319] OS: Linux
	I1225 19:05:02.716114  325002 kubeadm.go:319] CGROUPS_CPU: enabled
	I1225 19:05:02.716179  325002 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1225 19:05:02.716282  325002 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1225 19:05:02.716378  325002 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1225 19:05:02.716450  325002 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1225 19:05:02.716512  325002 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1225 19:05:02.716589  325002 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1225 19:05:02.716652  325002 kubeadm.go:319] CGROUPS_IO: enabled
	I1225 19:05:02.780358  325002 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1225 19:05:02.780521  325002 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1225 19:05:02.780670  325002 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1225 19:05:02.788372  325002 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1225 19:05:02.790412  325002 out.go:252]   - Generating certificates and keys ...
	I1225 19:05:02.790512  325002 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1225 19:05:02.790721  325002 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1225 19:05:02.947659  325002 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	W1225 19:04:58.656220  316482 node_ready.go:57] node "kindnet-910464" has "Ready":"False" status (will retry)
	W1225 19:05:01.155428  316482 node_ready.go:57] node "kindnet-910464" has "Ready":"False" status (will retry)
	I1225 19:05:02.994997  325002 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1225 19:05:03.412611  325002 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1225 19:05:03.699499  325002 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1225 19:05:03.827390  325002 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1225 19:05:03.827584  325002 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [calico-910464 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1225 19:05:04.081986  325002 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1225 19:05:04.082169  325002 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [calico-910464 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1225 19:05:04.525525  325002 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1225 19:05:04.774481  325002 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1225 19:05:05.330116  325002 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1225 19:05:05.330210  325002 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1225 19:05:05.433744  325002 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1225 19:05:05.604511  325002 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1225 19:05:05.673164  325002 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1225 19:05:05.788680  325002 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1225 19:05:06.229122  325002 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1225 19:05:06.229714  325002 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1225 19:05:06.233192  325002 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1225 19:05:02.624081  260034 logs.go:123] Gathering logs for dmesg ...
	I1225 19:05:02.624106  260034 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1225 19:05:02.638134  260034 logs.go:123] Gathering logs for kube-apiserver [e3d5802d1e3d9285d4b0925e9f5391b90305352a65abbab3d6461e977e07719c] ...
	I1225 19:05:02.638164  260034 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e3d5802d1e3d9285d4b0925e9f5391b90305352a65abbab3d6461e977e07719c"
	I1225 19:05:02.676056  260034 logs.go:123] Gathering logs for kube-controller-manager [d200870ef180511332d71d52b930a74dc0fbdb1935fecb1c39424f46f42d10fb] ...
	I1225 19:05:02.676084  260034 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d200870ef180511332d71d52b930a74dc0fbdb1935fecb1c39424f46f42d10fb"
	I1225 19:05:02.703746  260034 logs.go:123] Gathering logs for describe nodes ...
	I1225 19:05:02.703770  260034 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1225 19:05:02.764512  260034 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1225 19:05:02.764529  260034 logs.go:123] Gathering logs for kube-apiserver [c7e7a74e4da2eb0a24231c0da6fca5c4d2624c5eff5fd59ea857dba6d0787036] ...
	I1225 19:05:02.764540  260034 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 c7e7a74e4da2eb0a24231c0da6fca5c4d2624c5eff5fd59ea857dba6d0787036"
	I1225 19:05:02.798735  260034 logs.go:123] Gathering logs for etcd [b9e20d208a63ece66f4b7d503d5220ba92810f69d6963dee30a8cf70eeec5508] ...
	I1225 19:05:02.798781  260034 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b9e20d208a63ece66f4b7d503d5220ba92810f69d6963dee30a8cf70eeec5508"
	I1225 19:05:02.832062  260034 logs.go:123] Gathering logs for kube-scheduler [5908cdc560e3221d929c96c779b2e73ece21f96acb63b3adbf448cc7de791f2b] ...
	I1225 19:05:02.832088  260034 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5908cdc560e3221d929c96c779b2e73ece21f96acb63b3adbf448cc7de791f2b"
	I1225 19:05:02.860169  260034 logs.go:123] Gathering logs for kube-scheduler [47a1daa4bde269c4add70fd957319c2bd011e78566a1187d9a76b78fb4005782] ...
	I1225 19:05:02.860199  260034 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 47a1daa4bde269c4add70fd957319c2bd011e78566a1187d9a76b78fb4005782"
	I1225 19:05:02.888930  260034 logs.go:123] Gathering logs for kube-controller-manager [33343dda71f9e4a85025812aa89a625b9ed4aacd8cf40e537a040b7a352c6338] ...
	I1225 19:05:02.888954  260034 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 33343dda71f9e4a85025812aa89a625b9ed4aacd8cf40e537a040b7a352c6338"
	I1225 19:05:05.421456  260034 api_server.go:299] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1225 19:05:05.421907  260034 api_server.go:315] stopped: https://192.168.94.2:8443/healthz: Get "https://192.168.94.2:8443/healthz": dial tcp 192.168.94.2:8443: connect: connection refused
	I1225 19:05:05.421962  260034 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1225 19:05:05.422013  260034 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1225 19:05:05.449968  260034 cri.go:96] found id: "c7e7a74e4da2eb0a24231c0da6fca5c4d2624c5eff5fd59ea857dba6d0787036"
	I1225 19:05:05.449993  260034 cri.go:96] found id: "e3d5802d1e3d9285d4b0925e9f5391b90305352a65abbab3d6461e977e07719c"
	I1225 19:05:05.449999  260034 cri.go:96] found id: ""
	I1225 19:05:05.450008  260034 logs.go:282] 2 containers: [c7e7a74e4da2eb0a24231c0da6fca5c4d2624c5eff5fd59ea857dba6d0787036 e3d5802d1e3d9285d4b0925e9f5391b90305352a65abbab3d6461e977e07719c]
	I1225 19:05:05.450073  260034 ssh_runner.go:195] Run: which crictl
	I1225 19:05:05.454102  260034 ssh_runner.go:195] Run: which crictl
	I1225 19:05:05.458255  260034 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1225 19:05:05.458313  260034 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1225 19:05:05.487016  260034 cri.go:96] found id: "b9e20d208a63ece66f4b7d503d5220ba92810f69d6963dee30a8cf70eeec5508"
	I1225 19:05:05.487039  260034 cri.go:96] found id: ""
	I1225 19:05:05.487047  260034 logs.go:282] 1 containers: [b9e20d208a63ece66f4b7d503d5220ba92810f69d6963dee30a8cf70eeec5508]
	I1225 19:05:05.487101  260034 ssh_runner.go:195] Run: which crictl
	I1225 19:05:05.490933  260034 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1225 19:05:05.491015  260034 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1225 19:05:05.517387  260034 cri.go:96] found id: ""
	I1225 19:05:05.517414  260034 logs.go:282] 0 containers: []
	W1225 19:05:05.517425  260034 logs.go:284] No container was found matching "coredns"
	I1225 19:05:05.517432  260034 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1225 19:05:05.517489  260034 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1225 19:05:05.543076  260034 cri.go:96] found id: "5908cdc560e3221d929c96c779b2e73ece21f96acb63b3adbf448cc7de791f2b"
	I1225 19:05:05.543100  260034 cri.go:96] found id: "47a1daa4bde269c4add70fd957319c2bd011e78566a1187d9a76b78fb4005782"
	I1225 19:05:05.543106  260034 cri.go:96] found id: ""
	I1225 19:05:05.543114  260034 logs.go:282] 2 containers: [5908cdc560e3221d929c96c779b2e73ece21f96acb63b3adbf448cc7de791f2b 47a1daa4bde269c4add70fd957319c2bd011e78566a1187d9a76b78fb4005782]
	I1225 19:05:05.543168  260034 ssh_runner.go:195] Run: which crictl
	I1225 19:05:05.546886  260034 ssh_runner.go:195] Run: which crictl
	I1225 19:05:05.550425  260034 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1225 19:05:05.550481  260034 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1225 19:05:05.578265  260034 cri.go:96] found id: ""
	I1225 19:05:05.578288  260034 logs.go:282] 0 containers: []
	W1225 19:05:05.578299  260034 logs.go:284] No container was found matching "kube-proxy"
	I1225 19:05:05.578305  260034 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1225 19:05:05.578355  260034 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1225 19:05:05.607428  260034 cri.go:96] found id: "d200870ef180511332d71d52b930a74dc0fbdb1935fecb1c39424f46f42d10fb"
	I1225 19:05:05.607451  260034 cri.go:96] found id: "33343dda71f9e4a85025812aa89a625b9ed4aacd8cf40e537a040b7a352c6338"
	I1225 19:05:05.607457  260034 cri.go:96] found id: ""
	I1225 19:05:05.607466  260034 logs.go:282] 2 containers: [d200870ef180511332d71d52b930a74dc0fbdb1935fecb1c39424f46f42d10fb 33343dda71f9e4a85025812aa89a625b9ed4aacd8cf40e537a040b7a352c6338]
	I1225 19:05:05.607524  260034 ssh_runner.go:195] Run: which crictl
	I1225 19:05:05.611800  260034 ssh_runner.go:195] Run: which crictl
	I1225 19:05:05.616781  260034 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1225 19:05:05.616839  260034 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1225 19:05:05.645135  260034 cri.go:96] found id: ""
	I1225 19:05:05.645161  260034 logs.go:282] 0 containers: []
	W1225 19:05:05.645172  260034 logs.go:284] No container was found matching "kindnet"
	I1225 19:05:05.645179  260034 cri.go:61] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1225 19:05:05.645233  260034 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=storage-provisioner
	I1225 19:05:05.673164  260034 cri.go:96] found id: ""
	I1225 19:05:05.673191  260034 logs.go:282] 0 containers: []
	W1225 19:05:05.673202  260034 logs.go:284] No container was found matching "storage-provisioner"
	I1225 19:05:05.673212  260034 logs.go:123] Gathering logs for kube-scheduler [5908cdc560e3221d929c96c779b2e73ece21f96acb63b3adbf448cc7de791f2b] ...
	I1225 19:05:05.673226  260034 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5908cdc560e3221d929c96c779b2e73ece21f96acb63b3adbf448cc7de791f2b"
	I1225 19:05:05.701077  260034 logs.go:123] Gathering logs for kube-controller-manager [d200870ef180511332d71d52b930a74dc0fbdb1935fecb1c39424f46f42d10fb] ...
	I1225 19:05:05.701102  260034 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d200870ef180511332d71d52b930a74dc0fbdb1935fecb1c39424f46f42d10fb"
	I1225 19:05:05.728588  260034 logs.go:123] Gathering logs for container status ...
	I1225 19:05:05.728616  260034 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1225 19:05:05.761067  260034 logs.go:123] Gathering logs for describe nodes ...
	I1225 19:05:05.761092  260034 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1225 19:05:05.816495  260034 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1225 19:05:05.816516  260034 logs.go:123] Gathering logs for etcd [b9e20d208a63ece66f4b7d503d5220ba92810f69d6963dee30a8cf70eeec5508] ...
	I1225 19:05:05.816530  260034 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b9e20d208a63ece66f4b7d503d5220ba92810f69d6963dee30a8cf70eeec5508"
	I1225 19:05:05.851176  260034 logs.go:123] Gathering logs for kube-scheduler [47a1daa4bde269c4add70fd957319c2bd011e78566a1187d9a76b78fb4005782] ...
	I1225 19:05:05.851203  260034 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 47a1daa4bde269c4add70fd957319c2bd011e78566a1187d9a76b78fb4005782"
	I1225 19:05:05.877591  260034 logs.go:123] Gathering logs for kube-controller-manager [33343dda71f9e4a85025812aa89a625b9ed4aacd8cf40e537a040b7a352c6338] ...
	I1225 19:05:05.877615  260034 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 33343dda71f9e4a85025812aa89a625b9ed4aacd8cf40e537a040b7a352c6338"
	I1225 19:05:05.908077  260034 logs.go:123] Gathering logs for CRI-O ...
	I1225 19:05:05.908102  260034 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1225 19:05:05.975459  260034 logs.go:123] Gathering logs for kubelet ...
	I1225 19:05:05.975500  260034 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1225 19:05:06.065372  260034 logs.go:123] Gathering logs for dmesg ...
	I1225 19:05:06.065407  260034 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1225 19:05:06.079391  260034 logs.go:123] Gathering logs for kube-apiserver [c7e7a74e4da2eb0a24231c0da6fca5c4d2624c5eff5fd59ea857dba6d0787036] ...
	I1225 19:05:06.079424  260034 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 c7e7a74e4da2eb0a24231c0da6fca5c4d2624c5eff5fd59ea857dba6d0787036"
	I1225 19:05:06.109804  260034 logs.go:123] Gathering logs for kube-apiserver [e3d5802d1e3d9285d4b0925e9f5391b90305352a65abbab3d6461e977e07719c] ...
	I1225 19:05:06.109832  260034 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e3d5802d1e3d9285d4b0925e9f5391b90305352a65abbab3d6461e977e07719c"
	I1225 19:05:06.234754  325002 out.go:252]   - Booting up control plane ...
	I1225 19:05:06.234871  325002 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1225 19:05:06.234995  325002 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1225 19:05:06.235643  325002 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1225 19:05:06.261721  325002 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1225 19:05:06.261856  325002 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1225 19:05:06.268256  325002 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1225 19:05:06.268572  325002 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1225 19:05:06.268617  325002 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1225 19:05:06.372958  325002 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1225 19:05:06.373126  325002 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1225 19:05:07.373907  325002 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.001074188s
	I1225 19:05:07.378656  325002 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1225 19:05:07.378807  325002 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.76.2:8443/livez
	I1225 19:05:07.378978  325002 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1225 19:05:07.379091  325002 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	W1225 19:05:03.654834  316482 node_ready.go:57] node "kindnet-910464" has "Ready":"False" status (will retry)
	W1225 19:05:05.655169  316482 node_ready.go:57] node "kindnet-910464" has "Ready":"False" status (will retry)
	I1225 19:05:06.654182  316482 node_ready.go:49] node "kindnet-910464" is "Ready"
	I1225 19:05:06.654221  316482 node_ready.go:38] duration metric: took 12.002570095s for node "kindnet-910464" to be "Ready" ...
	I1225 19:05:06.654234  316482 api_server.go:52] waiting for apiserver process to appear ...
	I1225 19:05:06.654281  316482 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1225 19:05:06.668405  316482 api_server.go:72] duration metric: took 12.362051478s to wait for apiserver process to appear ...
	I1225 19:05:06.668436  316482 api_server.go:88] waiting for apiserver healthz status ...
	I1225 19:05:06.668456  316482 api_server.go:299] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1225 19:05:06.673688  316482 api_server.go:325] https://192.168.85.2:8443/healthz returned 200:
	ok
	I1225 19:05:06.674775  316482 api_server.go:141] control plane version: v1.34.3
	I1225 19:05:06.674809  316482 api_server.go:131] duration metric: took 6.366234ms to wait for apiserver health ...
	I1225 19:05:06.674820  316482 system_pods.go:43] waiting for kube-system pods to appear ...
	I1225 19:05:06.678651  316482 system_pods.go:59] 8 kube-system pods found
	I1225 19:05:06.678688  316482 system_pods.go:61] "coredns-66bc5c9577-f9kkb" [eae21b9f-a818-410a-9cd2-b5f964df0348] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1225 19:05:06.678696  316482 system_pods.go:61] "etcd-kindnet-910464" [1cd88165-f166-41bc-8f21-ce12c03a55fe] Running
	I1225 19:05:06.678709  316482 system_pods.go:61] "kindnet-hsfxd" [c2a15ba2-8a5a-4895-8e79-bfb006e2ad60] Running
	I1225 19:05:06.678715  316482 system_pods.go:61] "kube-apiserver-kindnet-910464" [a679fdae-bfcd-4481-9ec8-e6d0961b64b7] Running
	I1225 19:05:06.678723  316482 system_pods.go:61] "kube-controller-manager-kindnet-910464" [b2d7b97e-8cee-4f1a-867f-a1b17d97ec6f] Running
	I1225 19:05:06.678729  316482 system_pods.go:61] "kube-proxy-xd9t4" [0b2b72d2-1e3d-4263-bd67-3a29efbe0ec4] Running
	I1225 19:05:06.678733  316482 system_pods.go:61] "kube-scheduler-kindnet-910464" [b15885a9-575e-492d-9815-a087c66b53db] Running
	I1225 19:05:06.678741  316482 system_pods.go:61] "storage-provisioner" [05db71f5-eb7a-45d1-a812-37bfa41aef72] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1225 19:05:06.678753  316482 system_pods.go:74] duration metric: took 3.926483ms to wait for pod list to return data ...
	I1225 19:05:06.678766  316482 default_sa.go:34] waiting for default service account to be created ...
	I1225 19:05:06.681244  316482 default_sa.go:45] found service account: "default"
	I1225 19:05:06.681262  316482 default_sa.go:55] duration metric: took 2.48959ms for default service account to be created ...
	I1225 19:05:06.681271  316482 system_pods.go:116] waiting for k8s-apps to be running ...
	I1225 19:05:06.684492  316482 system_pods.go:86] 8 kube-system pods found
	I1225 19:05:06.684517  316482 system_pods.go:89] "coredns-66bc5c9577-f9kkb" [eae21b9f-a818-410a-9cd2-b5f964df0348] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1225 19:05:06.684524  316482 system_pods.go:89] "etcd-kindnet-910464" [1cd88165-f166-41bc-8f21-ce12c03a55fe] Running
	I1225 19:05:06.684534  316482 system_pods.go:89] "kindnet-hsfxd" [c2a15ba2-8a5a-4895-8e79-bfb006e2ad60] Running
	I1225 19:05:06.684540  316482 system_pods.go:89] "kube-apiserver-kindnet-910464" [a679fdae-bfcd-4481-9ec8-e6d0961b64b7] Running
	I1225 19:05:06.684556  316482 system_pods.go:89] "kube-controller-manager-kindnet-910464" [b2d7b97e-8cee-4f1a-867f-a1b17d97ec6f] Running
	I1225 19:05:06.684566  316482 system_pods.go:89] "kube-proxy-xd9t4" [0b2b72d2-1e3d-4263-bd67-3a29efbe0ec4] Running
	I1225 19:05:06.684571  316482 system_pods.go:89] "kube-scheduler-kindnet-910464" [b15885a9-575e-492d-9815-a087c66b53db] Running
	I1225 19:05:06.684583  316482 system_pods.go:89] "storage-provisioner" [05db71f5-eb7a-45d1-a812-37bfa41aef72] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1225 19:05:06.684607  316482 retry.go:84] will retry after 300ms: missing components: kube-dns
	I1225 19:05:06.957549  316482 system_pods.go:86] 8 kube-system pods found
	I1225 19:05:06.957581  316482 system_pods.go:89] "coredns-66bc5c9577-f9kkb" [eae21b9f-a818-410a-9cd2-b5f964df0348] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1225 19:05:06.957587  316482 system_pods.go:89] "etcd-kindnet-910464" [1cd88165-f166-41bc-8f21-ce12c03a55fe] Running
	I1225 19:05:06.957593  316482 system_pods.go:89] "kindnet-hsfxd" [c2a15ba2-8a5a-4895-8e79-bfb006e2ad60] Running
	I1225 19:05:06.957596  316482 system_pods.go:89] "kube-apiserver-kindnet-910464" [a679fdae-bfcd-4481-9ec8-e6d0961b64b7] Running
	I1225 19:05:06.957600  316482 system_pods.go:89] "kube-controller-manager-kindnet-910464" [b2d7b97e-8cee-4f1a-867f-a1b17d97ec6f] Running
	I1225 19:05:06.957604  316482 system_pods.go:89] "kube-proxy-xd9t4" [0b2b72d2-1e3d-4263-bd67-3a29efbe0ec4] Running
	I1225 19:05:06.957607  316482 system_pods.go:89] "kube-scheduler-kindnet-910464" [b15885a9-575e-492d-9815-a087c66b53db] Running
	I1225 19:05:06.957611  316482 system_pods.go:89] "storage-provisioner" [05db71f5-eb7a-45d1-a812-37bfa41aef72] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1225 19:05:07.308317  316482 system_pods.go:86] 8 kube-system pods found
	I1225 19:05:07.308359  316482 system_pods.go:89] "coredns-66bc5c9577-f9kkb" [eae21b9f-a818-410a-9cd2-b5f964df0348] Running
	I1225 19:05:07.308366  316482 system_pods.go:89] "etcd-kindnet-910464" [1cd88165-f166-41bc-8f21-ce12c03a55fe] Running
	I1225 19:05:07.308370  316482 system_pods.go:89] "kindnet-hsfxd" [c2a15ba2-8a5a-4895-8e79-bfb006e2ad60] Running
	I1225 19:05:07.308373  316482 system_pods.go:89] "kube-apiserver-kindnet-910464" [a679fdae-bfcd-4481-9ec8-e6d0961b64b7] Running
	I1225 19:05:07.308377  316482 system_pods.go:89] "kube-controller-manager-kindnet-910464" [b2d7b97e-8cee-4f1a-867f-a1b17d97ec6f] Running
	I1225 19:05:07.308388  316482 system_pods.go:89] "kube-proxy-xd9t4" [0b2b72d2-1e3d-4263-bd67-3a29efbe0ec4] Running
	I1225 19:05:07.308394  316482 system_pods.go:89] "kube-scheduler-kindnet-910464" [b15885a9-575e-492d-9815-a087c66b53db] Running
	I1225 19:05:07.308402  316482 system_pods.go:89] "storage-provisioner" [05db71f5-eb7a-45d1-a812-37bfa41aef72] Running
	I1225 19:05:07.308413  316482 system_pods.go:126] duration metric: took 627.134993ms to wait for k8s-apps to be running ...
	I1225 19:05:07.308426  316482 system_svc.go:44] waiting for kubelet service to be running ....
	I1225 19:05:07.308486  316482 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1225 19:05:07.322838  316482 system_svc.go:56] duration metric: took 14.400665ms WaitForService to wait for kubelet
	I1225 19:05:07.322879  316482 kubeadm.go:587] duration metric: took 13.016528928s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1225 19:05:07.322920  316482 node_conditions.go:102] verifying NodePressure condition ...
	I1225 19:05:07.325830  316482 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1225 19:05:07.325860  316482 node_conditions.go:123] node cpu capacity is 8
	I1225 19:05:07.325878  316482 node_conditions.go:105] duration metric: took 2.952951ms to run NodePressure ...
	I1225 19:05:07.325975  316482 start.go:242] waiting for startup goroutines ...
	I1225 19:05:07.325993  316482 start.go:247] waiting for cluster config update ...
	I1225 19:05:07.326020  316482 start.go:256] writing updated cluster config ...
	I1225 19:05:07.326321  316482 ssh_runner.go:195] Run: rm -f paused
	I1225 19:05:07.331140  316482 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1225 19:05:07.408468  316482 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-f9kkb" in "kube-system" namespace to be "Ready" or be gone ...
	I1225 19:05:07.414171  316482 pod_ready.go:94] pod "coredns-66bc5c9577-f9kkb" is "Ready"
	I1225 19:05:07.414198  316482 pod_ready.go:86] duration metric: took 5.709581ms for pod "coredns-66bc5c9577-f9kkb" in "kube-system" namespace to be "Ready" or be gone ...
	I1225 19:05:07.416996  316482 pod_ready.go:83] waiting for pod "etcd-kindnet-910464" in "kube-system" namespace to be "Ready" or be gone ...
	I1225 19:05:07.422376  316482 pod_ready.go:94] pod "etcd-kindnet-910464" is "Ready"
	I1225 19:05:07.422403  316482 pod_ready.go:86] duration metric: took 5.382037ms for pod "etcd-kindnet-910464" in "kube-system" namespace to be "Ready" or be gone ...
	I1225 19:05:07.425728  316482 pod_ready.go:83] waiting for pod "kube-apiserver-kindnet-910464" in "kube-system" namespace to be "Ready" or be gone ...
	I1225 19:05:07.430640  316482 pod_ready.go:94] pod "kube-apiserver-kindnet-910464" is "Ready"
	I1225 19:05:07.430666  316482 pod_ready.go:86] duration metric: took 4.912231ms for pod "kube-apiserver-kindnet-910464" in "kube-system" namespace to be "Ready" or be gone ...
	I1225 19:05:07.432824  316482 pod_ready.go:83] waiting for pod "kube-controller-manager-kindnet-910464" in "kube-system" namespace to be "Ready" or be gone ...
	I1225 19:05:07.735441  316482 pod_ready.go:94] pod "kube-controller-manager-kindnet-910464" is "Ready"
	I1225 19:05:07.735472  316482 pod_ready.go:86] duration metric: took 302.626998ms for pod "kube-controller-manager-kindnet-910464" in "kube-system" namespace to be "Ready" or be gone ...
	I1225 19:05:07.936111  316482 pod_ready.go:83] waiting for pod "kube-proxy-xd9t4" in "kube-system" namespace to be "Ready" or be gone ...
	I1225 19:05:08.335709  316482 pod_ready.go:94] pod "kube-proxy-xd9t4" is "Ready"
	I1225 19:05:08.335736  316482 pod_ready.go:86] duration metric: took 399.595303ms for pod "kube-proxy-xd9t4" in "kube-system" namespace to be "Ready" or be gone ...
	I1225 19:05:08.536161  316482 pod_ready.go:83] waiting for pod "kube-scheduler-kindnet-910464" in "kube-system" namespace to be "Ready" or be gone ...
	I1225 19:05:08.935954  316482 pod_ready.go:94] pod "kube-scheduler-kindnet-910464" is "Ready"
	I1225 19:05:08.935979  316482 pod_ready.go:86] duration metric: took 399.794429ms for pod "kube-scheduler-kindnet-910464" in "kube-system" namespace to be "Ready" or be gone ...
	I1225 19:05:08.935991  316482 pod_ready.go:40] duration metric: took 1.604819617s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1225 19:05:08.995638  316482 start.go:625] kubectl: 1.35.0, cluster: 1.34.3 (minor skew: 1)
	I1225 19:05:08.997529  316482 out.go:179] * Done! kubectl is now configured to use "kindnet-910464" cluster and "default" namespace by default
	I1225 19:05:09.186587  325002 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 1.808154935s
	I1225 19:05:09.491937  325002 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 2.113765437s
	I1225 19:05:11.380765  325002 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 4.002414344s
	I1225 19:05:11.399303  325002 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1225 19:05:11.413223  325002 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1225 19:05:11.424642  325002 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1225 19:05:11.424981  325002 kubeadm.go:319] [mark-control-plane] Marking the node calico-910464 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1225 19:05:11.433373  325002 kubeadm.go:319] [bootstrap-token] Using token: l3otb8.dp8zrsjgr44c03sh
	
	
	==> CRI-O <==
	Dec 25 19:04:31 default-k8s-diff-port-960022 crio[570]: time="2025-12-25T19:04:31.858930873Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Dec 25 19:04:31 default-k8s-diff-port-960022 crio[570]: time="2025-12-25T19:04:31.863468215Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Dec 25 19:04:31 default-k8s-diff-port-960022 crio[570]: time="2025-12-25T19:04:31.863493135Z" level=info msg="Updated default CNI network name to kindnet"
	Dec 25 19:04:49 default-k8s-diff-port-960022 crio[570]: time="2025-12-25T19:04:49.98294549Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=0d8ae04a-b627-4126-b021-5dee5acaf8b9 name=/runtime.v1.ImageService/ImageStatus
	Dec 25 19:04:49 default-k8s-diff-port-960022 crio[570]: time="2025-12-25T19:04:49.983937595Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=9c7cabc0-c3d9-4647-adf4-25db9d98d3d2 name=/runtime.v1.ImageService/ImageStatus
	Dec 25 19:04:49 default-k8s-diff-port-960022 crio[570]: time="2025-12-25T19:04:49.984921427Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-fphlq/dashboard-metrics-scraper" id=f0520f2c-98e7-46de-96d8-2d78549af1e6 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 25 19:04:49 default-k8s-diff-port-960022 crio[570]: time="2025-12-25T19:04:49.985053716Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 25 19:04:49 default-k8s-diff-port-960022 crio[570]: time="2025-12-25T19:04:49.992409317Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 25 19:04:49 default-k8s-diff-port-960022 crio[570]: time="2025-12-25T19:04:49.993066498Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 25 19:04:50 default-k8s-diff-port-960022 crio[570]: time="2025-12-25T19:04:50.022634972Z" level=info msg="Created container 14c27e56e2876104b9b97af1293ed36130d30c1c3b4118d07854fbbf7d79831b: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-fphlq/dashboard-metrics-scraper" id=f0520f2c-98e7-46de-96d8-2d78549af1e6 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 25 19:04:50 default-k8s-diff-port-960022 crio[570]: time="2025-12-25T19:04:50.023315184Z" level=info msg="Starting container: 14c27e56e2876104b9b97af1293ed36130d30c1c3b4118d07854fbbf7d79831b" id=de845614-fadb-4c4d-bb2a-93156ccfdefd name=/runtime.v1.RuntimeService/StartContainer
	Dec 25 19:04:50 default-k8s-diff-port-960022 crio[570]: time="2025-12-25T19:04:50.02542031Z" level=info msg="Started container" PID=1773 containerID=14c27e56e2876104b9b97af1293ed36130d30c1c3b4118d07854fbbf7d79831b description=kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-fphlq/dashboard-metrics-scraper id=de845614-fadb-4c4d-bb2a-93156ccfdefd name=/runtime.v1.RuntimeService/StartContainer sandboxID=0c71f75e9ba768a31e50c04d7264137071a6fdc51a04829ee5f6edd298136368
	Dec 25 19:04:50 default-k8s-diff-port-960022 crio[570]: time="2025-12-25T19:04:50.120261478Z" level=info msg="Removing container: 7de06a85103fcab5625cb5cc973880cf40f64d068132d994a54b5fbe58f7d967" id=000546b9-cc1b-4715-81b6-dd583d66c824 name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 25 19:04:50 default-k8s-diff-port-960022 crio[570]: time="2025-12-25T19:04:50.13159481Z" level=info msg="Removed container 7de06a85103fcab5625cb5cc973880cf40f64d068132d994a54b5fbe58f7d967: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-fphlq/dashboard-metrics-scraper" id=000546b9-cc1b-4715-81b6-dd583d66c824 name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 25 19:04:52 default-k8s-diff-port-960022 crio[570]: time="2025-12-25T19:04:52.127459293Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=de115b99-486d-47ac-ad6b-c0eadf05bd4f name=/runtime.v1.ImageService/ImageStatus
	Dec 25 19:04:52 default-k8s-diff-port-960022 crio[570]: time="2025-12-25T19:04:52.128438679Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=b7af1123-cde0-403e-85bf-b0dceb45cb80 name=/runtime.v1.ImageService/ImageStatus
	Dec 25 19:04:52 default-k8s-diff-port-960022 crio[570]: time="2025-12-25T19:04:52.12948955Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=6c4c598c-21a2-4b19-82c2-98caa6d81180 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 25 19:04:52 default-k8s-diff-port-960022 crio[570]: time="2025-12-25T19:04:52.129658752Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 25 19:04:52 default-k8s-diff-port-960022 crio[570]: time="2025-12-25T19:04:52.134250978Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 25 19:04:52 default-k8s-diff-port-960022 crio[570]: time="2025-12-25T19:04:52.134445288Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/c47f9b5d14ed478d0ec191434d9b6cbe000d036a33ad3c4cf87b48b046b61fc5/merged/etc/passwd: no such file or directory"
	Dec 25 19:04:52 default-k8s-diff-port-960022 crio[570]: time="2025-12-25T19:04:52.134480243Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/c47f9b5d14ed478d0ec191434d9b6cbe000d036a33ad3c4cf87b48b046b61fc5/merged/etc/group: no such file or directory"
	Dec 25 19:04:52 default-k8s-diff-port-960022 crio[570]: time="2025-12-25T19:04:52.134770535Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 25 19:04:52 default-k8s-diff-port-960022 crio[570]: time="2025-12-25T19:04:52.159042705Z" level=info msg="Created container 5649c03c0aa633da79d3929ef429eb6a11236dda58d14ea813f653c269745beb: kube-system/storage-provisioner/storage-provisioner" id=6c4c598c-21a2-4b19-82c2-98caa6d81180 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 25 19:04:52 default-k8s-diff-port-960022 crio[570]: time="2025-12-25T19:04:52.159650813Z" level=info msg="Starting container: 5649c03c0aa633da79d3929ef429eb6a11236dda58d14ea813f653c269745beb" id=5c409d8a-2413-4aaa-b2cb-19a709075074 name=/runtime.v1.RuntimeService/StartContainer
	Dec 25 19:04:52 default-k8s-diff-port-960022 crio[570]: time="2025-12-25T19:04:52.163468283Z" level=info msg="Started container" PID=1787 containerID=5649c03c0aa633da79d3929ef429eb6a11236dda58d14ea813f653c269745beb description=kube-system/storage-provisioner/storage-provisioner id=5c409d8a-2413-4aaa-b2cb-19a709075074 name=/runtime.v1.RuntimeService/StartContainer sandboxID=a4597b0b7c5d1031b163b16c345ed41795d846297c62fdd6ada00ab9be2830ac
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED             STATE               NAME                        ATTEMPT             POD ID              POD                                                    NAMESPACE
	5649c03c0aa63       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           20 seconds ago      Running             storage-provisioner         1                   a4597b0b7c5d1       storage-provisioner                                    kube-system
	14c27e56e2876       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           22 seconds ago      Exited              dashboard-metrics-scraper   2                   0c71f75e9ba76       dashboard-metrics-scraper-6ffb444bf9-fphlq             kubernetes-dashboard
	d0ee12735cd4d       docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029   41 seconds ago      Running             kubernetes-dashboard        0                   486340be868d4       kubernetes-dashboard-855c9754f9-hm5lx                  kubernetes-dashboard
	fdbf81a94147e       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                           50 seconds ago      Running             coredns                     0                   e951b8833a218       coredns-66bc5c9577-c9wmz                               kube-system
	1ee01c76421d4       56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c                                           50 seconds ago      Running             busybox                     1                   27464547da776       busybox                                                default
	f2ca16d825df4       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           50 seconds ago      Exited              storage-provisioner         0                   a4597b0b7c5d1       storage-provisioner                                    kube-system
	3aa3159c3178d       4921d7a6dffa922dd679732ba4797085c4f39e9a53bee8b6fdb1d463e8571251                                           50 seconds ago      Running             kindnet-cni                 0                   dae936be19434       kindnet-hj6rr                                          kube-system
	132f0bde2b6bf       36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691                                           50 seconds ago      Running             kube-proxy                  0                   09ccaea963719       kube-proxy-wl784                                       kube-system
	deb534fd994d4       aa27095f5619377172f3d59289ccb2ba567ebea93a736d1705be068b2c030b0c                                           53 seconds ago      Running             kube-apiserver              0                   db3f2f5486cb2       kube-apiserver-default-k8s-diff-port-960022            kube-system
	d7afd3e6efe6f       a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1                                           53 seconds ago      Running             etcd                        0                   402df0c317d41       etcd-default-k8s-diff-port-960022                      kube-system
	e331a83a17cd9       5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942                                           53 seconds ago      Running             kube-controller-manager     0                   481506cdc0bf4       kube-controller-manager-default-k8s-diff-port-960022   kube-system
	354a51e629671       aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78                                           53 seconds ago      Running             kube-scheduler              0                   13ad13bab9fc4       kube-scheduler-default-k8s-diff-port-960022            kube-system
	
	
	==> coredns [fdbf81a94147e6e035a27f9d8d605db6a96cbbbddbd65b9f768e335d836bedb5] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 66f0a748f44f6317a6b122af3f457c9dd0ecaed8718ffbf95a69434523efd9ec4992e71f54c7edd5753646fe9af89ac2138b9c3ce14d4a0ba9d2372a55f120bb
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:35806 - 63954 "HINFO IN 8877040098496447306.5506639103965423215. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.01950143s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-960022
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=default-k8s-diff-port-960022
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=65b0339f3ab6fa9cf527eb915d9288ef7a9c7fef
	                    minikube.k8s.io/name=default-k8s-diff-port-960022
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_25T19_03_21_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 25 Dec 2025 19:03:17 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-960022
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 25 Dec 2025 19:05:01 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 25 Dec 2025 19:04:51 +0000   Thu, 25 Dec 2025 19:03:16 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 25 Dec 2025 19:04:51 +0000   Thu, 25 Dec 2025 19:03:16 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 25 Dec 2025 19:04:51 +0000   Thu, 25 Dec 2025 19:03:16 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 25 Dec 2025 19:04:51 +0000   Thu, 25 Dec 2025 19:03:37 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.103.2
	  Hostname:    default-k8s-diff-port-960022
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863360Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863360Ki
	  pods:               110
	System Info:
	  Machine ID:                 6a05ebbd9dbde47388fc365d694bbbfd
	  System UUID:                66f57d40-b312-40d1-9a39-442700171c0b
	  Boot ID:                    665c5054-bd76-444c-ba4d-23c4edde1464
	  Kernel Version:             6.8.0-1045-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.3
	  Kubelet Version:            v1.34.3
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         91s
	  kube-system                 coredns-66bc5c9577-c9wmz                                100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     107s
	  kube-system                 etcd-default-k8s-diff-port-960022                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         112s
	  kube-system                 kindnet-hj6rr                                           100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      107s
	  kube-system                 kube-apiserver-default-k8s-diff-port-960022             250m (3%)     0 (0%)      0 (0%)           0 (0%)         112s
	  kube-system                 kube-controller-manager-default-k8s-diff-port-960022    200m (2%)     0 (0%)      0 (0%)           0 (0%)         112s
	  kube-system                 kube-proxy-wl784                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         107s
	  kube-system                 kube-scheduler-default-k8s-diff-port-960022             100m (1%)     0 (0%)      0 (0%)           0 (0%)         112s
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         106s
	  kubernetes-dashboard        dashboard-metrics-scraper-6ffb444bf9-fphlq              0 (0%)        0 (0%)      0 (0%)           0 (0%)         48s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-hm5lx                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         48s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 106s               kube-proxy       
	  Normal  Starting                 50s                kube-proxy       
	  Normal  Starting                 113s               kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  112s               kubelet          Node default-k8s-diff-port-960022 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    112s               kubelet          Node default-k8s-diff-port-960022 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     112s               kubelet          Node default-k8s-diff-port-960022 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           108s               node-controller  Node default-k8s-diff-port-960022 event: Registered Node default-k8s-diff-port-960022 in Controller
	  Normal  NodeReady                95s                kubelet          Node default-k8s-diff-port-960022 status is now: NodeReady
	  Normal  Starting                 55s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  54s (x8 over 55s)  kubelet          Node default-k8s-diff-port-960022 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    54s (x8 over 55s)  kubelet          Node default-k8s-diff-port-960022 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     54s (x8 over 55s)  kubelet          Node default-k8s-diff-port-960022 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           49s                node-controller  Node default-k8s-diff-port-960022 event: Registered Node default-k8s-diff-port-960022 in Controller
	
	
	==> dmesg <==
	[Dec25 18:17] MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
	[  +0.001703] TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details.
	[  +0.001000] MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
	[  +0.084012] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
	[  +0.391152] i8042: Warning: Keylock active
	[  +0.010665] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.485479] block sda: the capability attribute has been deprecated.
	[  +0.079658] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.024208] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +4.790329] kauditd_printk_skb: 47 callbacks suppressed
	
	
	==> etcd [d7afd3e6efe6f106fd792404c924d54e7a199c5c88a6c82664ffa1c729eee3ee] <==
	{"level":"warn","ts":"2025-12-25T19:04:19.667036Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58578","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-25T19:04:19.675073Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58584","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-25T19:04:19.682935Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58596","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-25T19:04:19.692077Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58622","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-25T19:04:19.702220Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58638","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-25T19:04:19.709275Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58648","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-25T19:04:19.716597Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58656","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-25T19:04:19.724125Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58682","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-25T19:04:19.730218Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58704","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-25T19:04:19.737016Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58714","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-25T19:04:19.743780Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58728","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-25T19:04:19.750189Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58748","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-25T19:04:19.757157Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58760","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-25T19:04:19.764733Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58774","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-25T19:04:19.771140Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58804","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-25T19:04:19.777622Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58824","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-25T19:04:19.785167Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58848","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-25T19:04:19.791698Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58864","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-25T19:04:19.798936Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58880","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-25T19:04:19.810458Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58904","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-25T19:04:19.816822Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58934","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-25T19:04:19.823680Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58958","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-25T19:04:19.871357Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58984","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-12-25T19:04:31.821468Z","caller":"traceutil/trace.go:172","msg":"trace[744845134] transaction","detail":"{read_only:false; response_revision:600; number_of_response:1; }","duration":"124.114112ms","start":"2025-12-25T19:04:31.697334Z","end":"2025-12-25T19:04:31.821448Z","steps":["trace[744845134] 'process raft request'  (duration: 123.971304ms)"],"step_count":1}
	{"level":"warn","ts":"2025-12-25T19:04:57.196599Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"133.642023ms","expected-duration":"100ms","prefix":"","request":"header:<ID:13873790924616199884 > lease_revoke:<id:40899b56e5c96e2e>","response":"size:28"}
	
	
	==> kernel <==
	 19:05:12 up 47 min,  0 user,  load average: 3.66, 2.94, 2.04
	Linux default-k8s-diff-port-960022 6.8.0-1045-gcp #48~22.04.1-Ubuntu SMP Tue Nov 25 13:07:56 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [3aa3159c3178dba42f58b963940a73d87ed0b361760a6b4cda22ce96594b70b9] <==
	I1225 19:04:21.599473       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1225 19:04:21.599743       1 main.go:139] hostIP = 192.168.103.2
	podIP = 192.168.103.2
	I1225 19:04:21.599883       1 main.go:148] setting mtu 1500 for CNI 
	I1225 19:04:21.599932       1 main.go:178] kindnetd IP family: "ipv4"
	I1225 19:04:21.599951       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-12-25T19:04:21Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1225 19:04:21.843093       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1225 19:04:21.843158       1 controller.go:381] "Waiting for informer caches to sync"
	I1225 19:04:21.843177       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1225 19:04:21.843341       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1225 19:04:22.344342       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1225 19:04:22.344379       1 metrics.go:72] Registering metrics
	I1225 19:04:22.396504       1 controller.go:711] "Syncing nftables rules"
	I1225 19:04:31.802078       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1225 19:04:31.802145       1 main.go:301] handling current node
	I1225 19:04:41.803514       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1225 19:04:41.803591       1 main.go:301] handling current node
	I1225 19:04:51.802453       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1225 19:04:51.802491       1 main.go:301] handling current node
	I1225 19:05:01.802030       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1225 19:05:01.802078       1 main.go:301] handling current node
	I1225 19:05:11.811035       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1225 19:05:11.811074       1 main.go:301] handling current node
	
	
	==> kube-apiserver [deb534fd994d4a2ae1235cd069ddaa760e1a5e6170fbf9a1ea236267d7a7dbf3] <==
	I1225 19:04:20.376108       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1225 19:04:20.375829       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1225 19:04:20.379878       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1225 19:04:20.375863       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1225 19:04:20.376004       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1225 19:04:20.376177       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1225 19:04:20.378754       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1225 19:04:20.378779       1 aggregator.go:171] initial CRD sync complete...
	I1225 19:04:20.385491       1 autoregister_controller.go:144] Starting autoregister controller
	I1225 19:04:20.385501       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1225 19:04:20.385508       1 cache.go:39] Caches are synced for autoregister controller
	I1225 19:04:20.390795       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1225 19:04:20.429503       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1225 19:04:20.446720       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1225 19:04:20.724949       1 controller.go:667] quota admission added evaluator for: namespaces
	I1225 19:04:20.753453       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1225 19:04:20.774035       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1225 19:04:20.784207       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1225 19:04:20.790434       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1225 19:04:20.824641       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.103.107.182"}
	I1225 19:04:20.835251       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.104.42.14"}
	I1225 19:04:21.279490       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1225 19:04:23.759997       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1225 19:04:24.208668       1 controller.go:667] quota admission added evaluator for: endpoints
	I1225 19:04:24.360755       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [e331a83a17cd96725879adde3c8dabff77823d5c1af59510c5a9822f15b9601d] <==
	I1225 19:04:23.727961       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1225 19:04:23.730318       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1225 19:04:23.730491       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1225 19:04:23.730605       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1225 19:04:23.731517       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1225 19:04:23.734091       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1225 19:04:23.735306       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1225 19:04:23.736528       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1225 19:04:23.738684       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1225 19:04:23.741014       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1225 19:04:23.754393       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1225 19:04:23.754432       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1225 19:04:23.754519       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1225 19:04:23.754525       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1225 19:04:23.754606       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1225 19:04:23.754679       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1225 19:04:23.754691       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="default-k8s-diff-port-960022"
	I1225 19:04:23.754801       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1225 19:04:23.754852       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1225 19:04:23.754954       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1225 19:04:23.755051       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1225 19:04:23.757311       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1225 19:04:23.758686       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1225 19:04:23.761114       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1225 19:04:23.790978       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	
	
	==> kube-proxy [132f0bde2b6bf2854770419c66dbc956a1f62dbc7f3be89c002b08f5c1f6eaa0] <==
	I1225 19:04:21.395198       1 server_linux.go:53] "Using iptables proxy"
	I1225 19:04:21.459526       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1225 19:04:21.559930       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1225 19:04:21.559979       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.103.2"]
	E1225 19:04:21.560080       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1225 19:04:21.578741       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1225 19:04:21.578801       1 server_linux.go:132] "Using iptables Proxier"
	I1225 19:04:21.585207       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1225 19:04:21.585666       1 server.go:527] "Version info" version="v1.34.3"
	I1225 19:04:21.585697       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1225 19:04:21.587481       1 config.go:200] "Starting service config controller"
	I1225 19:04:21.587640       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1225 19:04:21.587723       1 config.go:309] "Starting node config controller"
	I1225 19:04:21.587734       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1225 19:04:21.587739       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1225 19:04:21.588164       1 config.go:106] "Starting endpoint slice config controller"
	I1225 19:04:21.588175       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1225 19:04:21.588189       1 config.go:403] "Starting serviceCIDR config controller"
	I1225 19:04:21.588203       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1225 19:04:21.687766       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1225 19:04:21.688931       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1225 19:04:21.688948       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-scheduler [354a51e629671e49dd48aa32ce81ed41d5eaf4761e538194e03358bc1fcc7c09] <==
	I1225 19:04:19.289260       1 serving.go:386] Generated self-signed cert in-memory
	W1225 19:04:20.327056       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1225 19:04:20.327094       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1225 19:04:20.327105       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1225 19:04:20.327114       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1225 19:04:20.371017       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.3"
	I1225 19:04:20.371045       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1225 19:04:20.373846       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1225 19:04:20.373888       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1225 19:04:20.374263       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1225 19:04:20.374348       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1225 19:04:20.474940       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Dec 25 19:04:24 default-k8s-diff-port-960022 kubelet[732]: I1225 19:04:24.275674     732 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6mhpz\" (UniqueName: \"kubernetes.io/projected/877f70b3-c96c-4876-8dbe-f0ad7d7e0a01-kube-api-access-6mhpz\") pod \"kubernetes-dashboard-855c9754f9-hm5lx\" (UID: \"877f70b3-c96c-4876-8dbe-f0ad7d7e0a01\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-hm5lx"
	Dec 25 19:04:24 default-k8s-diff-port-960022 kubelet[732]: I1225 19:04:24.275743     732 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/877f70b3-c96c-4876-8dbe-f0ad7d7e0a01-tmp-volume\") pod \"kubernetes-dashboard-855c9754f9-hm5lx\" (UID: \"877f70b3-c96c-4876-8dbe-f0ad7d7e0a01\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-hm5lx"
	Dec 25 19:04:25 default-k8s-diff-port-960022 kubelet[732]: I1225 19:04:25.430140     732 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Dec 25 19:04:28 default-k8s-diff-port-960022 kubelet[732]: I1225 19:04:28.051169     732 scope.go:117] "RemoveContainer" containerID="fe9a8db3687bc9761a621fa1ff2579fd157df8850787a27f2f0b9ed4be852715"
	Dec 25 19:04:29 default-k8s-diff-port-960022 kubelet[732]: I1225 19:04:29.056179     732 scope.go:117] "RemoveContainer" containerID="fe9a8db3687bc9761a621fa1ff2579fd157df8850787a27f2f0b9ed4be852715"
	Dec 25 19:04:29 default-k8s-diff-port-960022 kubelet[732]: I1225 19:04:29.056352     732 scope.go:117] "RemoveContainer" containerID="7de06a85103fcab5625cb5cc973880cf40f64d068132d994a54b5fbe58f7d967"
	Dec 25 19:04:29 default-k8s-diff-port-960022 kubelet[732]: E1225 19:04:29.056562     732 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-fphlq_kubernetes-dashboard(b0c4f284-78d5-443d-a148-8562b8f45324)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-fphlq" podUID="b0c4f284-78d5-443d-a148-8562b8f45324"
	Dec 25 19:04:30 default-k8s-diff-port-960022 kubelet[732]: I1225 19:04:30.059052     732 scope.go:117] "RemoveContainer" containerID="7de06a85103fcab5625cb5cc973880cf40f64d068132d994a54b5fbe58f7d967"
	Dec 25 19:04:30 default-k8s-diff-port-960022 kubelet[732]: E1225 19:04:30.059272     732 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-fphlq_kubernetes-dashboard(b0c4f284-78d5-443d-a148-8562b8f45324)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-fphlq" podUID="b0c4f284-78d5-443d-a148-8562b8f45324"
	Dec 25 19:04:31 default-k8s-diff-port-960022 kubelet[732]: I1225 19:04:31.823301     732 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-hm5lx" podStartSLOduration=1.9369101039999999 podStartE2EDuration="7.823275852s" podCreationTimestamp="2025-12-25 19:04:24 +0000 UTC" firstStartedPulling="2025-12-25 19:04:24.480595068 +0000 UTC m=+6.601216581" lastFinishedPulling="2025-12-25 19:04:30.366960831 +0000 UTC m=+12.487582329" observedRunningTime="2025-12-25 19:04:31.077271364 +0000 UTC m=+13.197892883" watchObservedRunningTime="2025-12-25 19:04:31.823275852 +0000 UTC m=+13.943897371"
	Dec 25 19:04:35 default-k8s-diff-port-960022 kubelet[732]: I1225 19:04:35.729098     732 scope.go:117] "RemoveContainer" containerID="7de06a85103fcab5625cb5cc973880cf40f64d068132d994a54b5fbe58f7d967"
	Dec 25 19:04:35 default-k8s-diff-port-960022 kubelet[732]: E1225 19:04:35.729315     732 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-fphlq_kubernetes-dashboard(b0c4f284-78d5-443d-a148-8562b8f45324)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-fphlq" podUID="b0c4f284-78d5-443d-a148-8562b8f45324"
	Dec 25 19:04:49 default-k8s-diff-port-960022 kubelet[732]: I1225 19:04:49.982483     732 scope.go:117] "RemoveContainer" containerID="7de06a85103fcab5625cb5cc973880cf40f64d068132d994a54b5fbe58f7d967"
	Dec 25 19:04:50 default-k8s-diff-port-960022 kubelet[732]: I1225 19:04:50.118732     732 scope.go:117] "RemoveContainer" containerID="7de06a85103fcab5625cb5cc973880cf40f64d068132d994a54b5fbe58f7d967"
	Dec 25 19:04:50 default-k8s-diff-port-960022 kubelet[732]: I1225 19:04:50.119001     732 scope.go:117] "RemoveContainer" containerID="14c27e56e2876104b9b97af1293ed36130d30c1c3b4118d07854fbbf7d79831b"
	Dec 25 19:04:50 default-k8s-diff-port-960022 kubelet[732]: E1225 19:04:50.119204     732 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-fphlq_kubernetes-dashboard(b0c4f284-78d5-443d-a148-8562b8f45324)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-fphlq" podUID="b0c4f284-78d5-443d-a148-8562b8f45324"
	Dec 25 19:04:52 default-k8s-diff-port-960022 kubelet[732]: I1225 19:04:52.127013     732 scope.go:117] "RemoveContainer" containerID="f2ca16d825df4a18996b07e424ec1ab2fbf76ac12170d34c7de8ec692f2addc5"
	Dec 25 19:04:55 default-k8s-diff-port-960022 kubelet[732]: I1225 19:04:55.728467     732 scope.go:117] "RemoveContainer" containerID="14c27e56e2876104b9b97af1293ed36130d30c1c3b4118d07854fbbf7d79831b"
	Dec 25 19:04:55 default-k8s-diff-port-960022 kubelet[732]: E1225 19:04:55.728684     732 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-fphlq_kubernetes-dashboard(b0c4f284-78d5-443d-a148-8562b8f45324)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-fphlq" podUID="b0c4f284-78d5-443d-a148-8562b8f45324"
	Dec 25 19:05:07 default-k8s-diff-port-960022 kubelet[732]: I1225 19:05:07.982543     732 scope.go:117] "RemoveContainer" containerID="14c27e56e2876104b9b97af1293ed36130d30c1c3b4118d07854fbbf7d79831b"
	Dec 25 19:05:07 default-k8s-diff-port-960022 kubelet[732]: E1225 19:05:07.982763     732 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-fphlq_kubernetes-dashboard(b0c4f284-78d5-443d-a148-8562b8f45324)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-fphlq" podUID="b0c4f284-78d5-443d-a148-8562b8f45324"
	Dec 25 19:05:09 default-k8s-diff-port-960022 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Dec 25 19:05:09 default-k8s-diff-port-960022 systemd[1]: kubelet.service: Deactivated successfully.
	Dec 25 19:05:09 default-k8s-diff-port-960022 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 25 19:05:09 default-k8s-diff-port-960022 systemd[1]: kubelet.service: Consumed 1.695s CPU time.
	
	
	==> kubernetes-dashboard [d0ee12735cd4db3a4f33b6c01940acfb704c79ae33d33dd565e52a63afdb2b14] <==
	2025/12/25 19:04:30 Starting overwatch
	2025/12/25 19:04:30 Using namespace: kubernetes-dashboard
	2025/12/25 19:04:30 Using in-cluster config to connect to apiserver
	2025/12/25 19:04:30 Using secret token for csrf signing
	2025/12/25 19:04:30 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/12/25 19:04:30 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/12/25 19:04:30 Successful initial request to the apiserver, version: v1.34.3
	2025/12/25 19:04:30 Generating JWE encryption key
	2025/12/25 19:04:30 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/12/25 19:04:30 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/12/25 19:04:30 Initializing JWE encryption key from synchronized object
	2025/12/25 19:04:30 Creating in-cluster Sidecar client
	2025/12/25 19:04:30 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/12/25 19:04:30 Serving insecurely on HTTP port: 9090
	2025/12/25 19:05:00 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	
	
	==> storage-provisioner [5649c03c0aa633da79d3929ef429eb6a11236dda58d14ea813f653c269745beb] <==
	I1225 19:04:52.177151       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1225 19:04:52.184243       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1225 19:04:52.184280       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1225 19:04:52.186881       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1225 19:04:55.641909       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1225 19:04:59.902044       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1225 19:05:03.500341       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1225 19:05:06.554091       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1225 19:05:09.576255       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1225 19:05:09.581054       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1225 19:05:09.581183       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1225 19:05:09.581348       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"7550cadf-4431-4746-a11e-df2346058022", APIVersion:"v1", ResourceVersion:"633", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-diff-port-960022_613ba5f7-7d76-4286-be22-5c4833f040bd became leader
	I1225 19:05:09.581361       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-960022_613ba5f7-7d76-4286-be22-5c4833f040bd!
	W1225 19:05:09.587195       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1225 19:05:09.590499       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1225 19:05:09.682298       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-960022_613ba5f7-7d76-4286-be22-5c4833f040bd!
	W1225 19:05:11.593432       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1225 19:05:11.596888       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	
	
	==> storage-provisioner [f2ca16d825df4a18996b07e424ec1ab2fbf76ac12170d34c7de8ec692f2addc5] <==
	I1225 19:04:21.362187       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1225 19:04:51.364657       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

                                                
                                                
-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-960022 -n default-k8s-diff-port-960022
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-960022 -n default-k8s-diff-port-960022: exit status 2 (389.363747ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:270: (dbg) Run:  kubectl --context default-k8s-diff-port-960022 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:294: <<< TestStartStop/group/default-k8s-diff-port/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:295: ---------------------/post-mortem---------------------------------
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect default-k8s-diff-port-960022
helpers_test.go:244: (dbg) docker inspect default-k8s-diff-port-960022:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "e715f5c007f682ea129fd33b0f719ca5682bfd93ff193a553aa1f39c184e3d0f",
	        "Created": "2025-12-25T19:03:07.962087481Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 310397,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-25T19:04:11.623555976Z",
	            "FinishedAt": "2025-12-25T19:04:10.599028284Z"
	        },
	        "Image": "sha256:c444b5e11df9d6bc496256b0e11f4e11deb33a885211ded2c3a4667df55bcb6b",
	        "ResolvConfPath": "/var/lib/docker/containers/e715f5c007f682ea129fd33b0f719ca5682bfd93ff193a553aa1f39c184e3d0f/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/e715f5c007f682ea129fd33b0f719ca5682bfd93ff193a553aa1f39c184e3d0f/hostname",
	        "HostsPath": "/var/lib/docker/containers/e715f5c007f682ea129fd33b0f719ca5682bfd93ff193a553aa1f39c184e3d0f/hosts",
	        "LogPath": "/var/lib/docker/containers/e715f5c007f682ea129fd33b0f719ca5682bfd93ff193a553aa1f39c184e3d0f/e715f5c007f682ea129fd33b0f719ca5682bfd93ff193a553aa1f39c184e3d0f-json.log",
	        "Name": "/default-k8s-diff-port-960022",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "default-k8s-diff-port-960022:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "default-k8s-diff-port-960022",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "e715f5c007f682ea129fd33b0f719ca5682bfd93ff193a553aa1f39c184e3d0f",
	                "LowerDir": "/var/lib/docker/overlay2/183acc595d1c6327748578242623306ecba85c5f3e4e2d46fbcc0037e6eeba8c-init/diff:/var/lib/docker/overlay2/8152586e7e91edad0090b5c322534edd1346ae6dc28cbca1827aa4c23f366758/diff",
	                "MergedDir": "/var/lib/docker/overlay2/183acc595d1c6327748578242623306ecba85c5f3e4e2d46fbcc0037e6eeba8c/merged",
	                "UpperDir": "/var/lib/docker/overlay2/183acc595d1c6327748578242623306ecba85c5f3e4e2d46fbcc0037e6eeba8c/diff",
	                "WorkDir": "/var/lib/docker/overlay2/183acc595d1c6327748578242623306ecba85c5f3e4e2d46fbcc0037e6eeba8c/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "default-k8s-diff-port-960022",
	                "Source": "/var/lib/docker/volumes/default-k8s-diff-port-960022/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "default-k8s-diff-port-960022",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8444/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "default-k8s-diff-port-960022",
	                "name.minikube.sigs.k8s.io": "default-k8s-diff-port-960022",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "ebebc9af22b7525259a240328c212757ccc0bee502bb725cfaa662b5c90d4c9a",
	            "SandboxKey": "/var/run/docker/netns/ebebc9af22b7",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33108"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33109"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33112"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33110"
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33111"
	                    }
	                ]
	            },
	            "Networks": {
	                "default-k8s-diff-port-960022": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.103.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "6496648f4bb9e6db2a787d51dc81aaa3ff1aaea70439b67d588aff1a80515c8b",
	                    "EndpointID": "93bf82f077409f18f03edfafd6ad776887e64929b2afc54fcdb9f7399cea1325",
	                    "Gateway": "192.168.103.1",
	                    "IPAddress": "192.168.103.2",
	                    "MacAddress": "c6:bf:f2:cb:c6:ae",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "default-k8s-diff-port-960022",
	                        "e715f5c007f6"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-960022 -n default-k8s-diff-port-960022
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-960022 -n default-k8s-diff-port-960022: exit status 2 (420.791112ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
helpers_test.go:253: <<< TestStartStop/group/default-k8s-diff-port/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-960022 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-amd64 -p default-k8s-diff-port-960022 logs -n 25: (1.087367828s)
helpers_test.go:261: TestStartStop/group/default-k8s-diff-port/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                  ARGS                                                                  │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p auto-910464 sudo journalctl -xeu kubelet --all --full --no-pager                                                                    │ auto-910464                  │ jenkins │ v1.37.0 │ 25 Dec 25 19:04 UTC │ 25 Dec 25 19:04 UTC │
	│ ssh     │ -p auto-910464 sudo cat /etc/kubernetes/kubelet.conf                                                                                   │ auto-910464                  │ jenkins │ v1.37.0 │ 25 Dec 25 19:04 UTC │ 25 Dec 25 19:04 UTC │
	│ ssh     │ -p auto-910464 sudo cat /var/lib/kubelet/config.yaml                                                                                   │ auto-910464                  │ jenkins │ v1.37.0 │ 25 Dec 25 19:04 UTC │ 25 Dec 25 19:04 UTC │
	│ ssh     │ -p auto-910464 sudo systemctl status docker --all --full --no-pager                                                                    │ auto-910464                  │ jenkins │ v1.37.0 │ 25 Dec 25 19:04 UTC │                     │
	│ ssh     │ -p auto-910464 sudo systemctl cat docker --no-pager                                                                                    │ auto-910464                  │ jenkins │ v1.37.0 │ 25 Dec 25 19:04 UTC │ 25 Dec 25 19:04 UTC │
	│ ssh     │ -p auto-910464 sudo cat /etc/docker/daemon.json                                                                                        │ auto-910464                  │ jenkins │ v1.37.0 │ 25 Dec 25 19:04 UTC │                     │
	│ ssh     │ -p auto-910464 sudo docker system info                                                                                                 │ auto-910464                  │ jenkins │ v1.37.0 │ 25 Dec 25 19:04 UTC │                     │
	│ ssh     │ -p auto-910464 sudo systemctl status cri-docker --all --full --no-pager                                                                │ auto-910464                  │ jenkins │ v1.37.0 │ 25 Dec 25 19:04 UTC │                     │
	│ ssh     │ -p auto-910464 sudo systemctl cat cri-docker --no-pager                                                                                │ auto-910464                  │ jenkins │ v1.37.0 │ 25 Dec 25 19:04 UTC │ 25 Dec 25 19:04 UTC │
	│ ssh     │ -p auto-910464 sudo cat /etc/systemd/system/cri-docker.service.d/10-cni.conf                                                           │ auto-910464                  │ jenkins │ v1.37.0 │ 25 Dec 25 19:04 UTC │                     │
	│ ssh     │ -p auto-910464 sudo cat /usr/lib/systemd/system/cri-docker.service                                                                     │ auto-910464                  │ jenkins │ v1.37.0 │ 25 Dec 25 19:04 UTC │ 25 Dec 25 19:04 UTC │
	│ ssh     │ -p auto-910464 sudo cri-dockerd --version                                                                                              │ auto-910464                  │ jenkins │ v1.37.0 │ 25 Dec 25 19:04 UTC │ 25 Dec 25 19:04 UTC │
	│ ssh     │ -p auto-910464 sudo systemctl status containerd --all --full --no-pager                                                                │ auto-910464                  │ jenkins │ v1.37.0 │ 25 Dec 25 19:04 UTC │                     │
	│ ssh     │ -p auto-910464 sudo systemctl cat containerd --no-pager                                                                                │ auto-910464                  │ jenkins │ v1.37.0 │ 25 Dec 25 19:04 UTC │ 25 Dec 25 19:04 UTC │
	│ ssh     │ -p auto-910464 sudo cat /lib/systemd/system/containerd.service                                                                         │ auto-910464                  │ jenkins │ v1.37.0 │ 25 Dec 25 19:04 UTC │ 25 Dec 25 19:04 UTC │
	│ ssh     │ -p auto-910464 sudo cat /etc/containerd/config.toml                                                                                    │ auto-910464                  │ jenkins │ v1.37.0 │ 25 Dec 25 19:04 UTC │ 25 Dec 25 19:04 UTC │
	│ ssh     │ -p auto-910464 sudo containerd config dump                                                                                             │ auto-910464                  │ jenkins │ v1.37.0 │ 25 Dec 25 19:04 UTC │ 25 Dec 25 19:04 UTC │
	│ ssh     │ -p auto-910464 sudo systemctl status crio --all --full --no-pager                                                                      │ auto-910464                  │ jenkins │ v1.37.0 │ 25 Dec 25 19:04 UTC │ 25 Dec 25 19:04 UTC │
	│ ssh     │ -p auto-910464 sudo systemctl cat crio --no-pager                                                                                      │ auto-910464                  │ jenkins │ v1.37.0 │ 25 Dec 25 19:04 UTC │ 25 Dec 25 19:04 UTC │
	│ ssh     │ -p auto-910464 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                            │ auto-910464                  │ jenkins │ v1.37.0 │ 25 Dec 25 19:04 UTC │ 25 Dec 25 19:04 UTC │
	│ ssh     │ -p auto-910464 sudo crio config                                                                                                        │ auto-910464                  │ jenkins │ v1.37.0 │ 25 Dec 25 19:04 UTC │ 25 Dec 25 19:04 UTC │
	│ delete  │ -p auto-910464                                                                                                                         │ auto-910464                  │ jenkins │ v1.37.0 │ 25 Dec 25 19:04 UTC │ 25 Dec 25 19:04 UTC │
	│ start   │ -p calico-910464 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=crio │ calico-910464                │ jenkins │ v1.37.0 │ 25 Dec 25 19:04 UTC │                     │
	│ image   │ default-k8s-diff-port-960022 image list --format=json                                                                                  │ default-k8s-diff-port-960022 │ jenkins │ v1.37.0 │ 25 Dec 25 19:05 UTC │ 25 Dec 25 19:05 UTC │
	│ pause   │ -p default-k8s-diff-port-960022 --alsologtostderr -v=1                                                                                 │ default-k8s-diff-port-960022 │ jenkins │ v1.37.0 │ 25 Dec 25 19:05 UTC │                     │
	└─────────┴────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
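	
	For reference, the CRI-O diagnostics recorded in the table above (systemctl status crio, the /etc/crio file dump, and crio config) can be re-collected against any live profile through minikube ssh. The Go sketch below is illustrative only: the binary path matches the one exercised in this run, but the profile name is a placeholder and the helper is not part of the test harness.
	
	package main
	
	import (
		"fmt"
		"os/exec"
	)
	
	// collectCrioDiagnostics re-runs, over `minikube ssh`, the same CRI-O checks
	// that appear in the command table above. The profile name is a placeholder.
	func collectCrioDiagnostics(profile string) {
		cmds := [][]string{
			{"ssh", "-p", profile, "sudo", "systemctl", "status", "crio", "--all", "--full", "--no-pager"},
			{"ssh", "-p", profile, "sudo", "crio", "config"},
		}
		for _, args := range cmds {
			out, err := exec.Command("out/minikube-linux-amd64", args...).CombinedOutput()
			fmt.Printf("=== %v (err: %v) ===\n%s\n", args, err, out)
		}
	}
	
	func main() {
		collectCrioDiagnostics("calico-910464") // placeholder: any existing profile works
	}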
	
	
	==> Last Start <==
	Log file created at: 2025/12/25 19:04:52
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
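	
	Every entry that follows uses the klog-style header described on the line above. As a rough aid for filtering these logs, a minimal Go sketch (assuming exactly this [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg layout; not part of minikube) could split a line into its fields:
	
	package main
	
	import (
		"fmt"
		"regexp"
	)
	
	// logLine captures: severity, mmdd date, wall-clock time, thread id,
	// source file, source line, and the free-form message.
	var logLine = regexp.MustCompile(`^([IWEF])(\d{4}) (\d{2}:\d{2}:\d{2}\.\d{6})\s+(\d+) (\S+):(\d+)\] (.*)$`)
	
	func main() {
		sample := "I1225 19:04:52.951715  325002 out.go:360] Setting OutFile to fd 1 ..."
		if m := logLine.FindStringSubmatch(sample); m != nil {
			fmt.Printf("severity=%s date=%s time=%s tid=%s file=%s line=%s msg=%q\n",
				m[1], m[2], m[3], m[4], m[5], m[6], m[7])
		}
	}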
	I1225 19:04:52.951715  325002 out.go:360] Setting OutFile to fd 1 ...
	I1225 19:04:52.952031  325002 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1225 19:04:52.952043  325002 out.go:374] Setting ErrFile to fd 2...
	I1225 19:04:52.952049  325002 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1225 19:04:52.952394  325002 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22301-5579/.minikube/bin
	I1225 19:04:52.953046  325002 out.go:368] Setting JSON to false
	I1225 19:04:52.954234  325002 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":2841,"bootTime":1766686652,"procs":330,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1225 19:04:52.954301  325002 start.go:143] virtualization: kvm guest
	I1225 19:04:52.955712  325002 out.go:179] * [calico-910464] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1225 19:04:52.957049  325002 notify.go:221] Checking for updates...
	I1225 19:04:52.957077  325002 out.go:179]   - MINIKUBE_LOCATION=22301
	I1225 19:04:52.958284  325002 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1225 19:04:52.959695  325002 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22301-5579/kubeconfig
	I1225 19:04:52.960787  325002 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22301-5579/.minikube
	I1225 19:04:52.961823  325002 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1225 19:04:52.962903  325002 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1225 19:04:48.859576  316482 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1225 19:04:48.863516  316482 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.3/kubectl ...
	I1225 19:04:48.863530  316482 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2620 bytes)
	I1225 19:04:48.876029  316482 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1225 19:04:49.106442  316482 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1225 19:04:49.106558  316482 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1225 19:04:49.106557  316482 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes kindnet-910464 minikube.k8s.io/updated_at=2025_12_25T19_04_49_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=65b0339f3ab6fa9cf527eb915d9288ef7a9c7fef minikube.k8s.io/name=kindnet-910464 minikube.k8s.io/primary=true
	I1225 19:04:49.119914  316482 ops.go:34] apiserver oom_adj: -16
	I1225 19:04:49.223761  316482 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1225 19:04:49.724563  316482 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1225 19:04:50.223805  316482 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1225 19:04:50.724009  316482 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1225 19:04:51.224535  316482 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1225 19:04:51.724730  316482 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1225 19:04:52.224614  316482 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1225 19:04:52.723856  316482 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1225 19:04:52.964776  325002 config.go:182] Loaded profile config "default-k8s-diff-port-960022": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1225 19:04:52.964974  325002 config.go:182] Loaded profile config "kindnet-910464": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1225 19:04:52.965115  325002 config.go:182] Loaded profile config "kubernetes-upgrade-498224": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-rc.1
	I1225 19:04:52.965257  325002 driver.go:422] Setting default libvirt URI to qemu:///system
	I1225 19:04:52.996718  325002 docker.go:124] docker version: linux-29.1.3:Docker Engine - Community
	I1225 19:04:52.996819  325002 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1225 19:04:53.063037  325002 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:64 OomKillDisable:false NGoroutines:75 SystemTime:2025-12-25 19:04:53.05183533 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:dea7da592f5d1d2b7755e3a161be07f43fad8f75 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1225 19:04:53.063137  325002 docker.go:319] overlay module found
	I1225 19:04:53.064924  325002 out.go:179] * Using the docker driver based on user configuration
	I1225 19:04:53.066228  325002 start.go:309] selected driver: docker
	I1225 19:04:53.066242  325002 start.go:928] validating driver "docker" against <nil>
	I1225 19:04:53.066257  325002 start.go:939] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1225 19:04:53.067027  325002 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1225 19:04:53.129699  325002 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:64 OomKillDisable:false NGoroutines:75 SystemTime:2025-12-25 19:04:53.118804631 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:dea7da592f5d1d2b7755e3a161be07f43fad8f75 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1225 19:04:53.129884  325002 start_flags.go:333] no existing cluster config was found, will generate one from the flags 
	I1225 19:04:53.130211  325002 start_flags.go:1019] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1225 19:04:53.131854  325002 out.go:179] * Using Docker driver with root privileges
	I1225 19:04:53.133643  325002 cni.go:84] Creating CNI manager for "calico"
	I1225 19:04:53.133671  325002 start_flags.go:342] Found "Calico" CNI - setting NetworkPlugin=cni
	I1225 19:04:53.133751  325002 start.go:353] cluster config:
	{Name:calico-910464 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:calico-910464 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1225 19:04:53.135296  325002 out.go:179] * Starting "calico-910464" primary control-plane node in "calico-910464" cluster
	I1225 19:04:53.136546  325002 cache.go:134] Beginning downloading kic base image for docker with crio
	I1225 19:04:53.137951  325002 out.go:179] * Pulling base image v0.0.48-1766570851-22316 ...
	I1225 19:04:53.139149  325002 preload.go:188] Checking if preload exists for k8s version v1.34.3 and runtime crio
	I1225 19:04:53.139197  325002 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22301-5579/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.3-cri-o-overlay-amd64.tar.lz4
	I1225 19:04:53.139212  325002 cache.go:65] Caching tarball of preloaded images
	I1225 19:04:53.139232  325002 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a in local docker daemon
	I1225 19:04:53.139332  325002 preload.go:251] Found /home/jenkins/minikube-integration/22301-5579/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1225 19:04:53.139348  325002 cache.go:68] Finished verifying existence of preloaded tar for v1.34.3 on crio
	I1225 19:04:53.139476  325002 profile.go:143] Saving config to /home/jenkins/minikube-integration/22301-5579/.minikube/profiles/calico-910464/config.json ...
	I1225 19:04:53.139510  325002 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22301-5579/.minikube/profiles/calico-910464/config.json: {Name:mk694e835f93aef7a3573ddd262d5970b3f92ec5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1225 19:04:53.164848  325002 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a in local docker daemon, skipping pull
	I1225 19:04:53.164873  325002 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a exists in daemon, skipping load
	I1225 19:04:53.164902  325002 cache.go:243] Successfully downloaded all kic artifacts
	I1225 19:04:53.164938  325002 start.go:360] acquireMachinesLock for calico-910464: {Name:mkc09de11839eab5406205339afa568256a29ca9 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1225 19:04:53.165050  325002 start.go:364] duration metric: took 91.049µs to acquireMachinesLock for "calico-910464"
	I1225 19:04:53.165079  325002 start.go:93] Provisioning new machine with config: &{Name:calico-910464 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:calico-910464 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false} &{Name: IP: Port:8443 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1225 19:04:53.165183  325002 start.go:125] createHost starting for "" (driver="docker")
	I1225 19:04:53.224502  316482 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1225 19:04:53.724678  316482 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1225 19:04:54.224013  316482 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1225 19:04:54.304636  316482 kubeadm.go:1114] duration metric: took 5.198160942s to wait for elevateKubeSystemPrivileges
	I1225 19:04:54.304669  316482 kubeadm.go:403] duration metric: took 16.541963807s to StartCluster
	I1225 19:04:54.304689  316482 settings.go:142] acquiring lock: {Name:mk8db67a95daebdad9164c803819dcb179c3006a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1225 19:04:54.304763  316482 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22301-5579/kubeconfig
	I1225 19:04:54.305955  316482 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22301-5579/kubeconfig: {Name:mk959de02482281f87c2171d9b2421941fad1e06 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1225 19:04:54.306228  316482 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1225 19:04:54.306247  316482 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1225 19:04:54.306224  316482 start.go:236] Will wait 15m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1225 19:04:54.306333  316482 addons.go:70] Setting storage-provisioner=true in profile "kindnet-910464"
	I1225 19:04:54.306352  316482 addons.go:239] Setting addon storage-provisioner=true in "kindnet-910464"
	I1225 19:04:54.306379  316482 host.go:66] Checking if "kindnet-910464" exists ...
	I1225 19:04:54.306383  316482 addons.go:70] Setting default-storageclass=true in profile "kindnet-910464"
	I1225 19:04:54.306494  316482 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "kindnet-910464"
	I1225 19:04:54.306414  316482 config.go:182] Loaded profile config "kindnet-910464": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1225 19:04:54.306878  316482 cli_runner.go:164] Run: docker container inspect kindnet-910464 --format={{.State.Status}}
	I1225 19:04:54.307071  316482 cli_runner.go:164] Run: docker container inspect kindnet-910464 --format={{.State.Status}}
	I1225 19:04:54.308749  316482 out.go:179] * Verifying Kubernetes components...
	I1225 19:04:54.309968  316482 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1225 19:04:54.331153  316482 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1225 19:04:54.331760  316482 addons.go:239] Setting addon default-storageclass=true in "kindnet-910464"
	I1225 19:04:54.331805  316482 host.go:66] Checking if "kindnet-910464" exists ...
	I1225 19:04:54.332541  316482 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1225 19:04:54.332563  316482 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1225 19:04:54.332582  316482 cli_runner.go:164] Run: docker container inspect kindnet-910464 --format={{.State.Status}}
	I1225 19:04:54.332617  316482 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-910464
	I1225 19:04:54.366005  316482 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33113 SSHKeyPath:/home/jenkins/minikube-integration/22301-5579/.minikube/machines/kindnet-910464/id_rsa Username:docker}
	I1225 19:04:54.366675  316482 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1225 19:04:54.366696  316482 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1225 19:04:54.367514  316482 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-910464
	I1225 19:04:54.392827  316482 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33113 SSHKeyPath:/home/jenkins/minikube-integration/22301-5579/.minikube/machines/kindnet-910464/id_rsa Username:docker}
	I1225 19:04:54.414271  316482 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.85.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1225 19:04:54.482956  316482 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1225 19:04:54.486678  316482 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1225 19:04:54.521144  316482 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1225 19:04:54.649523  316482 start.go:987] {"host.minikube.internal": 192.168.85.1} host record injected into CoreDNS's ConfigMap
	I1225 19:04:54.651614  316482 node_ready.go:35] waiting up to 15m0s for node "kindnet-910464" to be "Ready" ...
	I1225 19:04:54.884597  316482 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	W1225 19:04:51.492224  310133 pod_ready.go:104] pod "coredns-66bc5c9577-c9wmz" is not "Ready", error: <nil>
	W1225 19:04:53.991719  310133 pod_ready.go:104] pod "coredns-66bc5c9577-c9wmz" is not "Ready", error: <nil>
	I1225 19:04:55.492201  310133 pod_ready.go:94] pod "coredns-66bc5c9577-c9wmz" is "Ready"
	I1225 19:04:55.492235  310133 pod_ready.go:86] duration metric: took 33.505935854s for pod "coredns-66bc5c9577-c9wmz" in "kube-system" namespace to be "Ready" or be gone ...
	I1225 19:04:55.494805  310133 pod_ready.go:83] waiting for pod "etcd-default-k8s-diff-port-960022" in "kube-system" namespace to be "Ready" or be gone ...
	I1225 19:04:55.498814  310133 pod_ready.go:94] pod "etcd-default-k8s-diff-port-960022" is "Ready"
	I1225 19:04:55.498840  310133 pod_ready.go:86] duration metric: took 4.013465ms for pod "etcd-default-k8s-diff-port-960022" in "kube-system" namespace to be "Ready" or be gone ...
	I1225 19:04:55.500633  310133 pod_ready.go:83] waiting for pod "kube-apiserver-default-k8s-diff-port-960022" in "kube-system" namespace to be "Ready" or be gone ...
	I1225 19:04:55.504591  310133 pod_ready.go:94] pod "kube-apiserver-default-k8s-diff-port-960022" is "Ready"
	I1225 19:04:55.504618  310133 pod_ready.go:86] duration metric: took 3.959115ms for pod "kube-apiserver-default-k8s-diff-port-960022" in "kube-system" namespace to be "Ready" or be gone ...
	I1225 19:04:55.506601  310133 pod_ready.go:83] waiting for pod "kube-controller-manager-default-k8s-diff-port-960022" in "kube-system" namespace to be "Ready" or be gone ...
	I1225 19:04:55.690863  310133 pod_ready.go:94] pod "kube-controller-manager-default-k8s-diff-port-960022" is "Ready"
	I1225 19:04:55.690919  310133 pod_ready.go:86] duration metric: took 184.292647ms for pod "kube-controller-manager-default-k8s-diff-port-960022" in "kube-system" namespace to be "Ready" or be gone ...
	I1225 19:04:55.890498  310133 pod_ready.go:83] waiting for pod "kube-proxy-wl784" in "kube-system" namespace to be "Ready" or be gone ...
	I1225 19:04:56.290458  310133 pod_ready.go:94] pod "kube-proxy-wl784" is "Ready"
	I1225 19:04:56.290485  310133 pod_ready.go:86] duration metric: took 399.960959ms for pod "kube-proxy-wl784" in "kube-system" namespace to be "Ready" or be gone ...
	I1225 19:04:56.490461  310133 pod_ready.go:83] waiting for pod "kube-scheduler-default-k8s-diff-port-960022" in "kube-system" namespace to be "Ready" or be gone ...
	I1225 19:04:56.890596  310133 pod_ready.go:94] pod "kube-scheduler-default-k8s-diff-port-960022" is "Ready"
	I1225 19:04:56.890633  310133 pod_ready.go:86] duration metric: took 400.146786ms for pod "kube-scheduler-default-k8s-diff-port-960022" in "kube-system" namespace to be "Ready" or be gone ...
	I1225 19:04:56.890645  310133 pod_ready.go:40] duration metric: took 34.908194557s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1225 19:04:56.933756  310133 start.go:625] kubectl: 1.35.0, cluster: 1.34.3 (minor skew: 1)
	I1225 19:04:56.943127  310133 out.go:179] * Done! kubectl is now configured to use "default-k8s-diff-port-960022" cluster and "default" namespace by default
	I1225 19:04:52.591910  260034 cri.go:96] found id: ""
	I1225 19:04:52.591937  260034 logs.go:282] 0 containers: []
	W1225 19:04:52.591945  260034 logs.go:284] No container was found matching "kube-proxy"
	I1225 19:04:52.591951  260034 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1225 19:04:52.592015  260034 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1225 19:04:52.619687  260034 cri.go:96] found id: "d200870ef180511332d71d52b930a74dc0fbdb1935fecb1c39424f46f42d10fb"
	I1225 19:04:52.619713  260034 cri.go:96] found id: "33343dda71f9e4a85025812aa89a625b9ed4aacd8cf40e537a040b7a352c6338"
	I1225 19:04:52.619719  260034 cri.go:96] found id: ""
	I1225 19:04:52.619728  260034 logs.go:282] 2 containers: [d200870ef180511332d71d52b930a74dc0fbdb1935fecb1c39424f46f42d10fb 33343dda71f9e4a85025812aa89a625b9ed4aacd8cf40e537a040b7a352c6338]
	I1225 19:04:52.619788  260034 ssh_runner.go:195] Run: which crictl
	I1225 19:04:52.623822  260034 ssh_runner.go:195] Run: which crictl
	I1225 19:04:52.627474  260034 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1225 19:04:52.627535  260034 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1225 19:04:52.668060  260034 cri.go:96] found id: ""
	I1225 19:04:52.668097  260034 logs.go:282] 0 containers: []
	W1225 19:04:52.668109  260034 logs.go:284] No container was found matching "kindnet"
	I1225 19:04:52.668116  260034 cri.go:61] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1225 19:04:52.668183  260034 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=storage-provisioner
	I1225 19:04:52.701512  260034 cri.go:96] found id: ""
	I1225 19:04:52.701539  260034 logs.go:282] 0 containers: []
	W1225 19:04:52.701549  260034 logs.go:284] No container was found matching "storage-provisioner"
	I1225 19:04:52.701561  260034 logs.go:123] Gathering logs for kube-controller-manager [33343dda71f9e4a85025812aa89a625b9ed4aacd8cf40e537a040b7a352c6338] ...
	I1225 19:04:52.701583  260034 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 33343dda71f9e4a85025812aa89a625b9ed4aacd8cf40e537a040b7a352c6338"
	I1225 19:04:52.732654  260034 logs.go:123] Gathering logs for CRI-O ...
	I1225 19:04:52.732681  260034 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1225 19:04:52.798236  260034 logs.go:123] Gathering logs for container status ...
	I1225 19:04:52.798278  260034 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1225 19:04:52.834440  260034 logs.go:123] Gathering logs for kubelet ...
	I1225 19:04:52.834472  260034 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1225 19:04:52.929537  260034 logs.go:123] Gathering logs for dmesg ...
	I1225 19:04:52.929566  260034 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1225 19:04:52.946267  260034 logs.go:123] Gathering logs for describe nodes ...
	I1225 19:04:52.946298  260034 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1225 19:04:53.013992  260034 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1225 19:04:53.014014  260034 logs.go:123] Gathering logs for kube-apiserver [e3d5802d1e3d9285d4b0925e9f5391b90305352a65abbab3d6461e977e07719c] ...
	I1225 19:04:53.014029  260034 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e3d5802d1e3d9285d4b0925e9f5391b90305352a65abbab3d6461e977e07719c"
	I1225 19:04:53.058130  260034 logs.go:123] Gathering logs for etcd [b9e20d208a63ece66f4b7d503d5220ba92810f69d6963dee30a8cf70eeec5508] ...
	I1225 19:04:53.058171  260034 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b9e20d208a63ece66f4b7d503d5220ba92810f69d6963dee30a8cf70eeec5508"
	I1225 19:04:53.096147  260034 logs.go:123] Gathering logs for kube-scheduler [5908cdc560e3221d929c96c779b2e73ece21f96acb63b3adbf448cc7de791f2b] ...
	I1225 19:04:53.096182  260034 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5908cdc560e3221d929c96c779b2e73ece21f96acb63b3adbf448cc7de791f2b"
	I1225 19:04:53.127705  260034 logs.go:123] Gathering logs for kube-apiserver [c7e7a74e4da2eb0a24231c0da6fca5c4d2624c5eff5fd59ea857dba6d0787036] ...
	I1225 19:04:53.127738  260034 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 c7e7a74e4da2eb0a24231c0da6fca5c4d2624c5eff5fd59ea857dba6d0787036"
	I1225 19:04:53.165476  260034 logs.go:123] Gathering logs for kube-scheduler [47a1daa4bde269c4add70fd957319c2bd011e78566a1187d9a76b78fb4005782] ...
	I1225 19:04:53.165502  260034 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 47a1daa4bde269c4add70fd957319c2bd011e78566a1187d9a76b78fb4005782"
	I1225 19:04:53.194090  260034 logs.go:123] Gathering logs for kube-controller-manager [d200870ef180511332d71d52b930a74dc0fbdb1935fecb1c39424f46f42d10fb] ...
	I1225 19:04:53.194115  260034 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d200870ef180511332d71d52b930a74dc0fbdb1935fecb1c39424f46f42d10fb"
	I1225 19:04:55.725039  260034 api_server.go:299] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1225 19:04:55.725481  260034 api_server.go:315] stopped: https://192.168.94.2:8443/healthz: Get "https://192.168.94.2:8443/healthz": dial tcp 192.168.94.2:8443: connect: connection refused
	I1225 19:04:55.725536  260034 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1225 19:04:55.725592  260034 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1225 19:04:55.757008  260034 cri.go:96] found id: "c7e7a74e4da2eb0a24231c0da6fca5c4d2624c5eff5fd59ea857dba6d0787036"
	I1225 19:04:55.757032  260034 cri.go:96] found id: "e3d5802d1e3d9285d4b0925e9f5391b90305352a65abbab3d6461e977e07719c"
	I1225 19:04:55.757037  260034 cri.go:96] found id: ""
	I1225 19:04:55.757045  260034 logs.go:282] 2 containers: [c7e7a74e4da2eb0a24231c0da6fca5c4d2624c5eff5fd59ea857dba6d0787036 e3d5802d1e3d9285d4b0925e9f5391b90305352a65abbab3d6461e977e07719c]
	I1225 19:04:55.757090  260034 ssh_runner.go:195] Run: which crictl
	I1225 19:04:55.761908  260034 ssh_runner.go:195] Run: which crictl
	I1225 19:04:55.765725  260034 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1225 19:04:55.765776  260034 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1225 19:04:55.794865  260034 cri.go:96] found id: "b9e20d208a63ece66f4b7d503d5220ba92810f69d6963dee30a8cf70eeec5508"
	I1225 19:04:55.794889  260034 cri.go:96] found id: ""
	I1225 19:04:55.794919  260034 logs.go:282] 1 containers: [b9e20d208a63ece66f4b7d503d5220ba92810f69d6963dee30a8cf70eeec5508]
	I1225 19:04:55.794988  260034 ssh_runner.go:195] Run: which crictl
	I1225 19:04:55.799014  260034 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1225 19:04:55.799077  260034 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1225 19:04:55.827783  260034 cri.go:96] found id: ""
	I1225 19:04:55.827807  260034 logs.go:282] 0 containers: []
	W1225 19:04:55.827815  260034 logs.go:284] No container was found matching "coredns"
	I1225 19:04:55.827823  260034 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1225 19:04:55.827873  260034 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1225 19:04:55.858521  260034 cri.go:96] found id: "5908cdc560e3221d929c96c779b2e73ece21f96acb63b3adbf448cc7de791f2b"
	I1225 19:04:55.858550  260034 cri.go:96] found id: "47a1daa4bde269c4add70fd957319c2bd011e78566a1187d9a76b78fb4005782"
	I1225 19:04:55.858558  260034 cri.go:96] found id: ""
	I1225 19:04:55.858569  260034 logs.go:282] 2 containers: [5908cdc560e3221d929c96c779b2e73ece21f96acb63b3adbf448cc7de791f2b 47a1daa4bde269c4add70fd957319c2bd011e78566a1187d9a76b78fb4005782]
	I1225 19:04:55.858628  260034 ssh_runner.go:195] Run: which crictl
	I1225 19:04:55.862646  260034 ssh_runner.go:195] Run: which crictl
	I1225 19:04:55.866385  260034 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1225 19:04:55.866446  260034 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1225 19:04:55.894107  260034 cri.go:96] found id: ""
	I1225 19:04:55.894128  260034 logs.go:282] 0 containers: []
	W1225 19:04:55.894136  260034 logs.go:284] No container was found matching "kube-proxy"
	I1225 19:04:55.894142  260034 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1225 19:04:55.894188  260034 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1225 19:04:55.924140  260034 cri.go:96] found id: "d200870ef180511332d71d52b930a74dc0fbdb1935fecb1c39424f46f42d10fb"
	I1225 19:04:55.924163  260034 cri.go:96] found id: "33343dda71f9e4a85025812aa89a625b9ed4aacd8cf40e537a040b7a352c6338"
	I1225 19:04:55.924167  260034 cri.go:96] found id: ""
	I1225 19:04:55.924174  260034 logs.go:282] 2 containers: [d200870ef180511332d71d52b930a74dc0fbdb1935fecb1c39424f46f42d10fb 33343dda71f9e4a85025812aa89a625b9ed4aacd8cf40e537a040b7a352c6338]
	I1225 19:04:55.924221  260034 ssh_runner.go:195] Run: which crictl
	I1225 19:04:55.928308  260034 ssh_runner.go:195] Run: which crictl
	I1225 19:04:55.931971  260034 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1225 19:04:55.932033  260034 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1225 19:04:55.961310  260034 cri.go:96] found id: ""
	I1225 19:04:55.961337  260034 logs.go:282] 0 containers: []
	W1225 19:04:55.961350  260034 logs.go:284] No container was found matching "kindnet"
	I1225 19:04:55.961357  260034 cri.go:61] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1225 19:04:55.961420  260034 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=storage-provisioner
	I1225 19:04:55.990145  260034 cri.go:96] found id: ""
	I1225 19:04:55.990173  260034 logs.go:282] 0 containers: []
	W1225 19:04:55.990186  260034 logs.go:284] No container was found matching "storage-provisioner"
	I1225 19:04:55.990197  260034 logs.go:123] Gathering logs for kubelet ...
	I1225 19:04:55.990210  260034 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1225 19:04:56.084901  260034 logs.go:123] Gathering logs for kube-apiserver [e3d5802d1e3d9285d4b0925e9f5391b90305352a65abbab3d6461e977e07719c] ...
	I1225 19:04:56.084940  260034 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e3d5802d1e3d9285d4b0925e9f5391b90305352a65abbab3d6461e977e07719c"
	I1225 19:04:56.130778  260034 logs.go:123] Gathering logs for kube-scheduler [5908cdc560e3221d929c96c779b2e73ece21f96acb63b3adbf448cc7de791f2b] ...
	I1225 19:04:56.130811  260034 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5908cdc560e3221d929c96c779b2e73ece21f96acb63b3adbf448cc7de791f2b"
	I1225 19:04:56.161334  260034 logs.go:123] Gathering logs for kube-controller-manager [d200870ef180511332d71d52b930a74dc0fbdb1935fecb1c39424f46f42d10fb] ...
	I1225 19:04:56.161364  260034 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d200870ef180511332d71d52b930a74dc0fbdb1935fecb1c39424f46f42d10fb"
	I1225 19:04:56.190314  260034 logs.go:123] Gathering logs for kube-controller-manager [33343dda71f9e4a85025812aa89a625b9ed4aacd8cf40e537a040b7a352c6338] ...
	I1225 19:04:56.190344  260034 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 33343dda71f9e4a85025812aa89a625b9ed4aacd8cf40e537a040b7a352c6338"
	I1225 19:04:56.221309  260034 logs.go:123] Gathering logs for CRI-O ...
	I1225 19:04:56.221333  260034 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1225 19:04:56.278334  260034 logs.go:123] Gathering logs for dmesg ...
	I1225 19:04:56.278365  260034 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1225 19:04:56.293021  260034 logs.go:123] Gathering logs for describe nodes ...
	I1225 19:04:56.293044  260034 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1225 19:04:56.350363  260034 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1225 19:04:56.350388  260034 logs.go:123] Gathering logs for kube-apiserver [c7e7a74e4da2eb0a24231c0da6fca5c4d2624c5eff5fd59ea857dba6d0787036] ...
	I1225 19:04:56.350407  260034 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 c7e7a74e4da2eb0a24231c0da6fca5c4d2624c5eff5fd59ea857dba6d0787036"
	I1225 19:04:56.380370  260034 logs.go:123] Gathering logs for etcd [b9e20d208a63ece66f4b7d503d5220ba92810f69d6963dee30a8cf70eeec5508] ...
	I1225 19:04:56.380398  260034 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b9e20d208a63ece66f4b7d503d5220ba92810f69d6963dee30a8cf70eeec5508"
	I1225 19:04:56.412970  260034 logs.go:123] Gathering logs for kube-scheduler [47a1daa4bde269c4add70fd957319c2bd011e78566a1187d9a76b78fb4005782] ...
	I1225 19:04:56.412997  260034 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 47a1daa4bde269c4add70fd957319c2bd011e78566a1187d9a76b78fb4005782"
	I1225 19:04:56.441800  260034 logs.go:123] Gathering logs for container status ...
	I1225 19:04:56.441826  260034 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1225 19:04:53.167769  325002 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1225 19:04:53.168069  325002 start.go:159] libmachine.API.Create for "calico-910464" (driver="docker")
	I1225 19:04:53.168105  325002 client.go:173] LocalClient.Create starting
	I1225 19:04:53.168189  325002 main.go:144] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22301-5579/.minikube/certs/ca.pem
	I1225 19:04:53.168233  325002 main.go:144] libmachine: Decoding PEM data...
	I1225 19:04:53.168262  325002 main.go:144] libmachine: Parsing certificate...
	I1225 19:04:53.168339  325002 main.go:144] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22301-5579/.minikube/certs/cert.pem
	I1225 19:04:53.168370  325002 main.go:144] libmachine: Decoding PEM data...
	I1225 19:04:53.168390  325002 main.go:144] libmachine: Parsing certificate...
	I1225 19:04:53.168748  325002 cli_runner.go:164] Run: docker network inspect calico-910464 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1225 19:04:53.189582  325002 cli_runner.go:211] docker network inspect calico-910464 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1225 19:04:53.189673  325002 network_create.go:284] running [docker network inspect calico-910464] to gather additional debugging logs...
	I1225 19:04:53.189715  325002 cli_runner.go:164] Run: docker network inspect calico-910464
	W1225 19:04:53.208205  325002 cli_runner.go:211] docker network inspect calico-910464 returned with exit code 1
	I1225 19:04:53.208238  325002 network_create.go:287] error running [docker network inspect calico-910464]: docker network inspect calico-910464: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network calico-910464 not found
	I1225 19:04:53.208250  325002 network_create.go:289] output of [docker network inspect calico-910464]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network calico-910464 not found
	
	** /stderr **
	I1225 19:04:53.208332  325002 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1225 19:04:53.229548  325002 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-ced36c84bfdd IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:4a:63:07:5b:3f:80} reservation:<nil>}
	I1225 19:04:53.230572  325002 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-4f7e79553acc IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:92:4f:4f:8b:03:9b} reservation:<nil>}
	I1225 19:04:53.231704  325002 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-f47bec209e15 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:b2:e9:83:11:22:b7} reservation:<nil>}
	I1225 19:04:53.232803  325002 network.go:206] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001eb83f0}
	I1225 19:04:53.232840  325002 network_create.go:124] attempt to create docker network calico-910464 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I1225 19:04:53.232913  325002 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=calico-910464 calico-910464
	I1225 19:04:53.285417  325002 network_create.go:108] docker network calico-910464 192.168.76.0/24 created
	I1225 19:04:53.285452  325002 kic.go:121] calculated static IP "192.168.76.2" for the "calico-910464" container
	I1225 19:04:53.285538  325002 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1225 19:04:53.305038  325002 cli_runner.go:164] Run: docker volume create calico-910464 --label name.minikube.sigs.k8s.io=calico-910464 --label created_by.minikube.sigs.k8s.io=true
	I1225 19:04:53.323566  325002 oci.go:103] Successfully created a docker volume calico-910464
	I1225 19:04:53.323651  325002 cli_runner.go:164] Run: docker run --rm --name calico-910464-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=calico-910464 --entrypoint /usr/bin/test -v calico-910464:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a -d /var/lib
	I1225 19:04:53.748243  325002 oci.go:107] Successfully prepared a docker volume calico-910464
	I1225 19:04:53.748303  325002 preload.go:188] Checking if preload exists for k8s version v1.34.3 and runtime crio
	I1225 19:04:53.748315  325002 kic.go:194] Starting extracting preloaded images to volume ...
	I1225 19:04:53.748378  325002 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22301-5579/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.3-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v calico-910464:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a -I lz4 -xf /preloaded.tar -C /extractDir
	I1225 19:04:54.889023  316482 addons.go:530] duration metric: took 582.767296ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I1225 19:04:55.154844  316482 kapi.go:214] "coredns" deployment in "kube-system" namespace and "kindnet-910464" context rescaled to 1 replicas
	W1225 19:04:56.654343  316482 node_ready.go:57] node "kindnet-910464" has "Ready":"False" status (will retry)
	I1225 19:04:58.212735  325002 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22301-5579/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.3-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v calico-910464:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a -I lz4 -xf /preloaded.tar -C /extractDir: (4.464313069s)
	I1225 19:04:58.212784  325002 kic.go:203] duration metric: took 4.464465034s to extract preloaded images to volume ...
	W1225 19:04:58.212874  325002 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1225 19:04:58.212928  325002 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1225 19:04:58.212982  325002 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1225 19:04:58.266410  325002 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname calico-910464 --name calico-910464 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=calico-910464 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=calico-910464 --network calico-910464 --ip 192.168.76.2 --volume calico-910464:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a
	I1225 19:04:58.534175  325002 cli_runner.go:164] Run: docker container inspect calico-910464 --format={{.State.Running}}
	I1225 19:04:58.552005  325002 cli_runner.go:164] Run: docker container inspect calico-910464 --format={{.State.Status}}
	I1225 19:04:58.570282  325002 cli_runner.go:164] Run: docker exec calico-910464 stat /var/lib/dpkg/alternatives/iptables
	I1225 19:04:58.618195  325002 oci.go:144] the created container "calico-910464" has a running status.
	I1225 19:04:58.618234  325002 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/22301-5579/.minikube/machines/calico-910464/id_rsa...
	I1225 19:04:58.685066  325002 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/22301-5579/.minikube/machines/calico-910464/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1225 19:04:58.710001  325002 cli_runner.go:164] Run: docker container inspect calico-910464 --format={{.State.Status}}
	I1225 19:04:58.733851  325002 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1225 19:04:58.733877  325002 kic_runner.go:114] Args: [docker exec --privileged calico-910464 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1225 19:04:58.780749  325002 cli_runner.go:164] Run: docker container inspect calico-910464 --format={{.State.Status}}
	I1225 19:04:58.806445  325002 machine.go:94] provisionDockerMachine start ...
	I1225 19:04:58.806538  325002 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-910464
	I1225 19:04:58.828831  325002 main.go:144] libmachine: Using SSH client type: native
	I1225 19:04:58.829302  325002 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e300] 0x850fa0 <nil>  [] 0s} 127.0.0.1 33118 <nil> <nil>}
	I1225 19:04:58.829322  325002 main.go:144] libmachine: About to run SSH command:
	hostname
	I1225 19:04:58.962762  325002 main.go:144] libmachine: SSH cmd err, output: <nil>: calico-910464
	
	I1225 19:04:58.962789  325002 ubuntu.go:182] provisioning hostname "calico-910464"
	I1225 19:04:58.962855  325002 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-910464
	I1225 19:04:58.983861  325002 main.go:144] libmachine: Using SSH client type: native
	I1225 19:04:58.984191  325002 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e300] 0x850fa0 <nil>  [] 0s} 127.0.0.1 33118 <nil> <nil>}
	I1225 19:04:58.984212  325002 main.go:144] libmachine: About to run SSH command:
	sudo hostname calico-910464 && echo "calico-910464" | sudo tee /etc/hostname
	I1225 19:04:59.122249  325002 main.go:144] libmachine: SSH cmd err, output: <nil>: calico-910464
	
	I1225 19:04:59.122347  325002 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-910464
	I1225 19:04:59.144627  325002 main.go:144] libmachine: Using SSH client type: native
	I1225 19:04:59.144866  325002 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e300] 0x850fa0 <nil>  [] 0s} 127.0.0.1 33118 <nil> <nil>}
	I1225 19:04:59.144884  325002 main.go:144] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\scalico-910464' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 calico-910464/g' /etc/hosts;
				else 
					echo '127.0.1.1 calico-910464' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1225 19:04:59.271213  325002 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	I1225 19:04:59.271250  325002 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22301-5579/.minikube CaCertPath:/home/jenkins/minikube-integration/22301-5579/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22301-5579/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22301-5579/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22301-5579/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22301-5579/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22301-5579/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22301-5579/.minikube}
	I1225 19:04:59.271310  325002 ubuntu.go:190] setting up certificates
	I1225 19:04:59.271328  325002 provision.go:84] configureAuth start
	I1225 19:04:59.271397  325002 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" calico-910464
	I1225 19:04:59.289996  325002 provision.go:143] copyHostCerts
	I1225 19:04:59.290050  325002 exec_runner.go:144] found /home/jenkins/minikube-integration/22301-5579/.minikube/ca.pem, removing ...
	I1225 19:04:59.290058  325002 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22301-5579/.minikube/ca.pem
	I1225 19:04:59.290124  325002 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22301-5579/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22301-5579/.minikube/ca.pem (1078 bytes)
	I1225 19:04:59.290231  325002 exec_runner.go:144] found /home/jenkins/minikube-integration/22301-5579/.minikube/cert.pem, removing ...
	I1225 19:04:59.290243  325002 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22301-5579/.minikube/cert.pem
	I1225 19:04:59.290287  325002 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22301-5579/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22301-5579/.minikube/cert.pem (1123 bytes)
	I1225 19:04:59.290397  325002 exec_runner.go:144] found /home/jenkins/minikube-integration/22301-5579/.minikube/key.pem, removing ...
	I1225 19:04:59.290410  325002 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22301-5579/.minikube/key.pem
	I1225 19:04:59.290448  325002 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22301-5579/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22301-5579/.minikube/key.pem (1679 bytes)
	I1225 19:04:59.290543  325002 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22301-5579/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22301-5579/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22301-5579/.minikube/certs/ca-key.pem org=jenkins.calico-910464 san=[127.0.0.1 192.168.76.2 calico-910464 localhost minikube]
	I1225 19:04:59.635604  325002 provision.go:177] copyRemoteCerts
	I1225 19:04:59.635662  325002 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1225 19:04:59.635697  325002 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-910464
	I1225 19:04:59.654388  325002 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33118 SSHKeyPath:/home/jenkins/minikube-integration/22301-5579/.minikube/machines/calico-910464/id_rsa Username:docker}
	I1225 19:04:59.748267  325002 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22301-5579/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1225 19:04:59.767548  325002 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22301-5579/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1225 19:04:59.784667  325002 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22301-5579/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1225 19:04:59.802481  325002 provision.go:87] duration metric: took 531.133852ms to configureAuth
	I1225 19:04:59.802507  325002 ubuntu.go:206] setting minikube options for container-runtime
	I1225 19:04:59.802670  325002 config.go:182] Loaded profile config "calico-910464": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1225 19:04:59.802778  325002 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-910464
	I1225 19:04:59.824163  325002 main.go:144] libmachine: Using SSH client type: native
	I1225 19:04:59.824410  325002 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e300] 0x850fa0 <nil>  [] 0s} 127.0.0.1 33118 <nil> <nil>}
	I1225 19:04:59.824435  325002 main.go:144] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1225 19:05:00.083498  325002 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1225 19:05:00.083525  325002 machine.go:97] duration metric: took 1.277055226s to provisionDockerMachine
	I1225 19:05:00.083536  325002 client.go:176] duration metric: took 6.915424073s to LocalClient.Create
	I1225 19:05:00.083555  325002 start.go:167] duration metric: took 6.915485451s to libmachine.API.Create "calico-910464"
	I1225 19:05:00.083564  325002 start.go:293] postStartSetup for "calico-910464" (driver="docker")
	I1225 19:05:00.083578  325002 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1225 19:05:00.083635  325002 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1225 19:05:00.083672  325002 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-910464
	I1225 19:05:00.102133  325002 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33118 SSHKeyPath:/home/jenkins/minikube-integration/22301-5579/.minikube/machines/calico-910464/id_rsa Username:docker}
	I1225 19:05:00.195425  325002 ssh_runner.go:195] Run: cat /etc/os-release
	I1225 19:05:00.198888  325002 main.go:144] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1225 19:05:00.198947  325002 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1225 19:05:00.198961  325002 filesync.go:126] Scanning /home/jenkins/minikube-integration/22301-5579/.minikube/addons for local assets ...
	I1225 19:05:00.199015  325002 filesync.go:126] Scanning /home/jenkins/minikube-integration/22301-5579/.minikube/files for local assets ...
	I1225 19:05:00.199086  325002 filesync.go:149] local asset: /home/jenkins/minikube-integration/22301-5579/.minikube/files/etc/ssl/certs/91122.pem -> 91122.pem in /etc/ssl/certs
	I1225 19:05:00.199171  325002 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1225 19:05:00.207000  325002 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22301-5579/.minikube/files/etc/ssl/certs/91122.pem --> /etc/ssl/certs/91122.pem (1708 bytes)
	I1225 19:05:00.227005  325002 start.go:296] duration metric: took 143.424254ms for postStartSetup
	I1225 19:05:00.227302  325002 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" calico-910464
	I1225 19:05:00.245273  325002 profile.go:143] Saving config to /home/jenkins/minikube-integration/22301-5579/.minikube/profiles/calico-910464/config.json ...
	I1225 19:05:00.245527  325002 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1225 19:05:00.245565  325002 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-910464
	I1225 19:05:00.264074  325002 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33118 SSHKeyPath:/home/jenkins/minikube-integration/22301-5579/.minikube/machines/calico-910464/id_rsa Username:docker}
	I1225 19:05:00.352294  325002 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1225 19:05:00.357175  325002 start.go:128] duration metric: took 7.191978561s to createHost
	I1225 19:05:00.357200  325002 start.go:83] releasing machines lock for "calico-910464", held for 7.192138157s
	I1225 19:05:00.357260  325002 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" calico-910464
	I1225 19:05:00.375679  325002 ssh_runner.go:195] Run: cat /version.json
	I1225 19:05:00.375733  325002 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1225 19:05:00.375789  325002 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-910464
	I1225 19:05:00.375736  325002 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-910464
	I1225 19:05:00.396793  325002 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33118 SSHKeyPath:/home/jenkins/minikube-integration/22301-5579/.minikube/machines/calico-910464/id_rsa Username:docker}
	I1225 19:05:00.396995  325002 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33118 SSHKeyPath:/home/jenkins/minikube-integration/22301-5579/.minikube/machines/calico-910464/id_rsa Username:docker}
	I1225 19:05:00.540046  325002 ssh_runner.go:195] Run: systemctl --version
	I1225 19:05:00.546611  325002 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1225 19:05:00.581834  325002 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1225 19:05:00.586641  325002 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1225 19:05:00.586711  325002 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1225 19:05:00.613728  325002 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1225 19:05:00.613751  325002 start.go:496] detecting cgroup driver to use...
	I1225 19:05:00.613787  325002 detect.go:190] detected "systemd" cgroup driver on host os
	I1225 19:05:00.613830  325002 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1225 19:05:00.629644  325002 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1225 19:05:00.642001  325002 docker.go:218] disabling cri-docker service (if available) ...
	I1225 19:05:00.642056  325002 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1225 19:05:00.659490  325002 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1225 19:05:00.676634  325002 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1225 19:05:00.761907  325002 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1225 19:05:00.850355  325002 docker.go:234] disabling docker service ...
	I1225 19:05:00.850416  325002 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1225 19:05:00.868752  325002 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1225 19:05:00.881414  325002 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1225 19:05:00.964238  325002 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1225 19:05:01.049877  325002 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1225 19:05:01.062744  325002 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1225 19:05:01.076735  325002 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1225 19:05:01.076786  325002 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1225 19:05:01.086586  325002 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1225 19:05:01.086639  325002 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1225 19:05:01.095654  325002 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1225 19:05:01.104978  325002 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1225 19:05:01.114635  325002 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1225 19:05:01.123768  325002 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1225 19:05:01.133113  325002 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1225 19:05:01.147989  325002 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1225 19:05:01.158491  325002 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1225 19:05:01.166169  325002 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1225 19:05:01.173623  325002 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1225 19:05:01.253979  325002 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1225 19:05:01.380441  325002 start.go:553] Will wait 60s for socket path /var/run/crio/crio.sock
	I1225 19:05:01.380511  325002 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1225 19:05:01.384524  325002 start.go:574] Will wait 60s for crictl version
	I1225 19:05:01.384586  325002 ssh_runner.go:195] Run: which crictl
	I1225 19:05:01.388229  325002 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1225 19:05:01.413148  325002 start.go:590] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1225 19:05:01.413225  325002 ssh_runner.go:195] Run: crio --version
	I1225 19:05:01.440807  325002 ssh_runner.go:195] Run: crio --version
	I1225 19:05:01.469406  325002 out.go:179] * Preparing Kubernetes v1.34.3 on CRI-O 1.34.3 ...
	I1225 19:04:58.972954  260034 api_server.go:299] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1225 19:04:58.973362  260034 api_server.go:315] stopped: https://192.168.94.2:8443/healthz: Get "https://192.168.94.2:8443/healthz": dial tcp 192.168.94.2:8443: connect: connection refused
	I1225 19:04:58.973415  260034 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1225 19:04:58.973466  260034 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1225 19:04:59.003827  260034 cri.go:96] found id: "c7e7a74e4da2eb0a24231c0da6fca5c4d2624c5eff5fd59ea857dba6d0787036"
	I1225 19:04:59.003850  260034 cri.go:96] found id: "e3d5802d1e3d9285d4b0925e9f5391b90305352a65abbab3d6461e977e07719c"
	I1225 19:04:59.003856  260034 cri.go:96] found id: ""
	I1225 19:04:59.003865  260034 logs.go:282] 2 containers: [c7e7a74e4da2eb0a24231c0da6fca5c4d2624c5eff5fd59ea857dba6d0787036 e3d5802d1e3d9285d4b0925e9f5391b90305352a65abbab3d6461e977e07719c]
	I1225 19:04:59.003929  260034 ssh_runner.go:195] Run: which crictl
	I1225 19:04:59.008142  260034 ssh_runner.go:195] Run: which crictl
	I1225 19:04:59.011923  260034 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1225 19:04:59.011990  260034 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1225 19:04:59.040983  260034 cri.go:96] found id: "b9e20d208a63ece66f4b7d503d5220ba92810f69d6963dee30a8cf70eeec5508"
	I1225 19:04:59.041005  260034 cri.go:96] found id: ""
	I1225 19:04:59.041013  260034 logs.go:282] 1 containers: [b9e20d208a63ece66f4b7d503d5220ba92810f69d6963dee30a8cf70eeec5508]
	I1225 19:04:59.041068  260034 ssh_runner.go:195] Run: which crictl
	I1225 19:04:59.045125  260034 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1225 19:04:59.045205  260034 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1225 19:04:59.072631  260034 cri.go:96] found id: ""
	I1225 19:04:59.072659  260034 logs.go:282] 0 containers: []
	W1225 19:04:59.072670  260034 logs.go:284] No container was found matching "coredns"
	I1225 19:04:59.072678  260034 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1225 19:04:59.072730  260034 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1225 19:04:59.099249  260034 cri.go:96] found id: "5908cdc560e3221d929c96c779b2e73ece21f96acb63b3adbf448cc7de791f2b"
	I1225 19:04:59.099271  260034 cri.go:96] found id: "47a1daa4bde269c4add70fd957319c2bd011e78566a1187d9a76b78fb4005782"
	I1225 19:04:59.099278  260034 cri.go:96] found id: ""
	I1225 19:04:59.099287  260034 logs.go:282] 2 containers: [5908cdc560e3221d929c96c779b2e73ece21f96acb63b3adbf448cc7de791f2b 47a1daa4bde269c4add70fd957319c2bd011e78566a1187d9a76b78fb4005782]
	I1225 19:04:59.099347  260034 ssh_runner.go:195] Run: which crictl
	I1225 19:04:59.103348  260034 ssh_runner.go:195] Run: which crictl
	I1225 19:04:59.107017  260034 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1225 19:04:59.107078  260034 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1225 19:04:59.139040  260034 cri.go:96] found id: ""
	I1225 19:04:59.139069  260034 logs.go:282] 0 containers: []
	W1225 19:04:59.139081  260034 logs.go:284] No container was found matching "kube-proxy"
	I1225 19:04:59.139088  260034 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1225 19:04:59.139145  260034 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1225 19:04:59.167758  260034 cri.go:96] found id: "d200870ef180511332d71d52b930a74dc0fbdb1935fecb1c39424f46f42d10fb"
	I1225 19:04:59.167780  260034 cri.go:96] found id: "33343dda71f9e4a85025812aa89a625b9ed4aacd8cf40e537a040b7a352c6338"
	I1225 19:04:59.167787  260034 cri.go:96] found id: ""
	I1225 19:04:59.167796  260034 logs.go:282] 2 containers: [d200870ef180511332d71d52b930a74dc0fbdb1935fecb1c39424f46f42d10fb 33343dda71f9e4a85025812aa89a625b9ed4aacd8cf40e537a040b7a352c6338]
	I1225 19:04:59.167871  260034 ssh_runner.go:195] Run: which crictl
	I1225 19:04:59.171879  260034 ssh_runner.go:195] Run: which crictl
	I1225 19:04:59.175852  260034 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1225 19:04:59.175935  260034 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1225 19:04:59.206028  260034 cri.go:96] found id: ""
	I1225 19:04:59.206051  260034 logs.go:282] 0 containers: []
	W1225 19:04:59.206060  260034 logs.go:284] No container was found matching "kindnet"
	I1225 19:04:59.206065  260034 cri.go:61] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1225 19:04:59.206112  260034 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=storage-provisioner
	I1225 19:04:59.234022  260034 cri.go:96] found id: ""
	I1225 19:04:59.234047  260034 logs.go:282] 0 containers: []
	W1225 19:04:59.234055  260034 logs.go:284] No container was found matching "storage-provisioner"
	I1225 19:04:59.234064  260034 logs.go:123] Gathering logs for kube-controller-manager [33343dda71f9e4a85025812aa89a625b9ed4aacd8cf40e537a040b7a352c6338] ...
	I1225 19:04:59.234077  260034 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 33343dda71f9e4a85025812aa89a625b9ed4aacd8cf40e537a040b7a352c6338"
	I1225 19:04:59.259332  260034 logs.go:123] Gathering logs for CRI-O ...
	I1225 19:04:59.259357  260034 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1225 19:04:59.316762  260034 logs.go:123] Gathering logs for kubelet ...
	I1225 19:04:59.316794  260034 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1225 19:04:59.403475  260034 logs.go:123] Gathering logs for describe nodes ...
	I1225 19:04:59.403505  260034 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1225 19:04:59.460425  260034 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1225 19:04:59.460456  260034 logs.go:123] Gathering logs for kube-apiserver [c7e7a74e4da2eb0a24231c0da6fca5c4d2624c5eff5fd59ea857dba6d0787036] ...
	I1225 19:04:59.460474  260034 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 c7e7a74e4da2eb0a24231c0da6fca5c4d2624c5eff5fd59ea857dba6d0787036"
	I1225 19:04:59.491507  260034 logs.go:123] Gathering logs for kube-apiserver [e3d5802d1e3d9285d4b0925e9f5391b90305352a65abbab3d6461e977e07719c] ...
	I1225 19:04:59.491536  260034 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e3d5802d1e3d9285d4b0925e9f5391b90305352a65abbab3d6461e977e07719c"
	I1225 19:04:59.526042  260034 logs.go:123] Gathering logs for etcd [b9e20d208a63ece66f4b7d503d5220ba92810f69d6963dee30a8cf70eeec5508] ...
	I1225 19:04:59.526071  260034 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b9e20d208a63ece66f4b7d503d5220ba92810f69d6963dee30a8cf70eeec5508"
	I1225 19:04:59.561533  260034 logs.go:123] Gathering logs for kube-controller-manager [d200870ef180511332d71d52b930a74dc0fbdb1935fecb1c39424f46f42d10fb] ...
	I1225 19:04:59.561565  260034 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d200870ef180511332d71d52b930a74dc0fbdb1935fecb1c39424f46f42d10fb"
	I1225 19:04:59.587730  260034 logs.go:123] Gathering logs for container status ...
	I1225 19:04:59.587758  260034 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1225 19:04:59.619554  260034 logs.go:123] Gathering logs for dmesg ...
	I1225 19:04:59.619578  260034 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1225 19:04:59.632881  260034 logs.go:123] Gathering logs for kube-scheduler [5908cdc560e3221d929c96c779b2e73ece21f96acb63b3adbf448cc7de791f2b] ...
	I1225 19:04:59.632926  260034 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5908cdc560e3221d929c96c779b2e73ece21f96acb63b3adbf448cc7de791f2b"
	I1225 19:04:59.661634  260034 logs.go:123] Gathering logs for kube-scheduler [47a1daa4bde269c4add70fd957319c2bd011e78566a1187d9a76b78fb4005782] ...
	I1225 19:04:59.661655  260034 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 47a1daa4bde269c4add70fd957319c2bd011e78566a1187d9a76b78fb4005782"
	I1225 19:05:02.189961  260034 api_server.go:299] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1225 19:05:02.190386  260034 api_server.go:315] stopped: https://192.168.94.2:8443/healthz: Get "https://192.168.94.2:8443/healthz": dial tcp 192.168.94.2:8443: connect: connection refused
	I1225 19:05:02.190430  260034 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1225 19:05:02.190481  260034 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1225 19:05:02.219121  260034 cri.go:96] found id: "c7e7a74e4da2eb0a24231c0da6fca5c4d2624c5eff5fd59ea857dba6d0787036"
	I1225 19:05:02.219139  260034 cri.go:96] found id: "e3d5802d1e3d9285d4b0925e9f5391b90305352a65abbab3d6461e977e07719c"
	I1225 19:05:02.219143  260034 cri.go:96] found id: ""
	I1225 19:05:02.219151  260034 logs.go:282] 2 containers: [c7e7a74e4da2eb0a24231c0da6fca5c4d2624c5eff5fd59ea857dba6d0787036 e3d5802d1e3d9285d4b0925e9f5391b90305352a65abbab3d6461e977e07719c]
	I1225 19:05:02.219192  260034 ssh_runner.go:195] Run: which crictl
	I1225 19:05:02.223013  260034 ssh_runner.go:195] Run: which crictl
	I1225 19:05:02.226952  260034 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1225 19:05:02.227007  260034 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1225 19:05:02.255257  260034 cri.go:96] found id: "b9e20d208a63ece66f4b7d503d5220ba92810f69d6963dee30a8cf70eeec5508"
	I1225 19:05:02.255281  260034 cri.go:96] found id: ""
	I1225 19:05:02.255291  260034 logs.go:282] 1 containers: [b9e20d208a63ece66f4b7d503d5220ba92810f69d6963dee30a8cf70eeec5508]
	I1225 19:05:02.255354  260034 ssh_runner.go:195] Run: which crictl
	I1225 19:05:02.259448  260034 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1225 19:05:02.259503  260034 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1225 19:05:02.285751  260034 cri.go:96] found id: ""
	I1225 19:05:02.285778  260034 logs.go:282] 0 containers: []
	W1225 19:05:02.285789  260034 logs.go:284] No container was found matching "coredns"
	I1225 19:05:02.285800  260034 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1225 19:05:02.285856  260034 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1225 19:05:02.313754  260034 cri.go:96] found id: "5908cdc560e3221d929c96c779b2e73ece21f96acb63b3adbf448cc7de791f2b"
	I1225 19:05:02.313777  260034 cri.go:96] found id: "47a1daa4bde269c4add70fd957319c2bd011e78566a1187d9a76b78fb4005782"
	I1225 19:05:02.313784  260034 cri.go:96] found id: ""
	I1225 19:05:02.313794  260034 logs.go:282] 2 containers: [5908cdc560e3221d929c96c779b2e73ece21f96acb63b3adbf448cc7de791f2b 47a1daa4bde269c4add70fd957319c2bd011e78566a1187d9a76b78fb4005782]
	I1225 19:05:02.313847  260034 ssh_runner.go:195] Run: which crictl
	I1225 19:05:02.318213  260034 ssh_runner.go:195] Run: which crictl
	I1225 19:05:02.322440  260034 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1225 19:05:02.322493  260034 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1225 19:05:02.349734  260034 cri.go:96] found id: ""
	I1225 19:05:02.349756  260034 logs.go:282] 0 containers: []
	W1225 19:05:02.349765  260034 logs.go:284] No container was found matching "kube-proxy"
	I1225 19:05:02.349771  260034 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1225 19:05:02.349828  260034 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1225 19:05:02.377326  260034 cri.go:96] found id: "d200870ef180511332d71d52b930a74dc0fbdb1935fecb1c39424f46f42d10fb"
	I1225 19:05:02.377347  260034 cri.go:96] found id: "33343dda71f9e4a85025812aa89a625b9ed4aacd8cf40e537a040b7a352c6338"
	I1225 19:05:02.377352  260034 cri.go:96] found id: ""
	I1225 19:05:02.377361  260034 logs.go:282] 2 containers: [d200870ef180511332d71d52b930a74dc0fbdb1935fecb1c39424f46f42d10fb 33343dda71f9e4a85025812aa89a625b9ed4aacd8cf40e537a040b7a352c6338]
	I1225 19:05:02.377416  260034 ssh_runner.go:195] Run: which crictl
	I1225 19:05:02.381402  260034 ssh_runner.go:195] Run: which crictl
	I1225 19:05:02.385131  260034 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1225 19:05:02.385195  260034 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1225 19:05:02.413654  260034 cri.go:96] found id: ""
	I1225 19:05:02.413677  260034 logs.go:282] 0 containers: []
	W1225 19:05:02.413685  260034 logs.go:284] No container was found matching "kindnet"
	I1225 19:05:02.413690  260034 cri.go:61] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1225 19:05:02.413740  260034 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=storage-provisioner
	I1225 19:05:02.441502  260034 cri.go:96] found id: ""
	I1225 19:05:02.441523  260034 logs.go:282] 0 containers: []
	W1225 19:05:02.441532  260034 logs.go:284] No container was found matching "storage-provisioner"
	I1225 19:05:02.441539  260034 logs.go:123] Gathering logs for CRI-O ...
	I1225 19:05:02.441549  260034 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1225 19:05:02.498220  260034 logs.go:123] Gathering logs for container status ...
	I1225 19:05:02.498247  260034 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1225 19:05:02.528748  260034 logs.go:123] Gathering logs for kubelet ...
	I1225 19:05:02.528783  260034 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1225 19:05:01.470755  325002 cli_runner.go:164] Run: docker network inspect calico-910464 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1225 19:05:01.487991  325002 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1225 19:05:01.492207  325002 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1225 19:05:01.502874  325002 kubeadm.go:884] updating cluster {Name:calico-910464 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:calico-910464 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false} ...
	I1225 19:05:01.503024  325002 preload.go:188] Checking if preload exists for k8s version v1.34.3 and runtime crio
	I1225 19:05:01.503069  325002 ssh_runner.go:195] Run: sudo crictl images --output json
	I1225 19:05:01.535512  325002 crio.go:561] all images are preloaded for cri-o runtime.
	I1225 19:05:01.535530  325002 crio.go:433] Images already preloaded, skipping extraction
	I1225 19:05:01.535573  325002 ssh_runner.go:195] Run: sudo crictl images --output json
	I1225 19:05:01.560536  325002 crio.go:561] all images are preloaded for cri-o runtime.
	I1225 19:05:01.560557  325002 cache_images.go:86] Images are preloaded, skipping loading
	I1225 19:05:01.560563  325002 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.34.3 crio true true} ...
	I1225 19:05:01.560644  325002 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=calico-910464 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.3 ClusterName:calico-910464 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico}
	I1225 19:05:01.560703  325002 ssh_runner.go:195] Run: crio config
	I1225 19:05:01.607803  325002 cni.go:84] Creating CNI manager for "calico"
	I1225 19:05:01.607828  325002 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1225 19:05:01.607849  325002 kubeadm.go:197] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.34.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:calico-910464 NodeName:calico-910464 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1225 19:05:01.607982  325002 kubeadm.go:203] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "calico-910464"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1225 19:05:01.608042  325002 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.3
	I1225 19:05:01.616213  325002 binaries.go:51] Found k8s binaries, skipping transfer
	I1225 19:05:01.616280  325002 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1225 19:05:01.623810  325002 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I1225 19:05:01.636762  325002 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1225 19:05:01.651517  325002 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2209 bytes)
	I1225 19:05:01.664867  325002 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1225 19:05:01.668603  325002 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1225 19:05:01.678642  325002 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1225 19:05:01.763619  325002 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1225 19:05:01.786759  325002 certs.go:69] Setting up /home/jenkins/minikube-integration/22301-5579/.minikube/profiles/calico-910464 for IP: 192.168.76.2
	I1225 19:05:01.786781  325002 certs.go:195] generating shared ca certs ...
	I1225 19:05:01.786796  325002 certs.go:227] acquiring lock for ca certs: {Name:mkc96ab6366f062029d385d20297063671b19bca Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1225 19:05:01.786987  325002 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22301-5579/.minikube/ca.key
	I1225 19:05:01.787057  325002 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22301-5579/.minikube/proxy-client-ca.key
	I1225 19:05:01.787076  325002 certs.go:257] generating profile certs ...
	I1225 19:05:01.787160  325002 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22301-5579/.minikube/profiles/calico-910464/client.key
	I1225 19:05:01.787182  325002 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22301-5579/.minikube/profiles/calico-910464/client.crt with IP's: []
	I1225 19:05:01.882126  325002 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22301-5579/.minikube/profiles/calico-910464/client.crt ...
	I1225 19:05:01.882154  325002 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22301-5579/.minikube/profiles/calico-910464/client.crt: {Name:mk4a61814cd88fa168d655fbd09949c88a89e8be Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1225 19:05:01.882359  325002 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22301-5579/.minikube/profiles/calico-910464/client.key ...
	I1225 19:05:01.882377  325002 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22301-5579/.minikube/profiles/calico-910464/client.key: {Name:mk12b74d72a24bef28d951f4c17d80affedb5701 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1225 19:05:01.882486  325002 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22301-5579/.minikube/profiles/calico-910464/apiserver.key.3b7551d0
	I1225 19:05:01.882502  325002 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22301-5579/.minikube/profiles/calico-910464/apiserver.crt.3b7551d0 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.76.2]
	I1225 19:05:02.040289  325002 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22301-5579/.minikube/profiles/calico-910464/apiserver.crt.3b7551d0 ...
	I1225 19:05:02.040330  325002 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22301-5579/.minikube/profiles/calico-910464/apiserver.crt.3b7551d0: {Name:mkc8e32a96a4cc7aa0bb8b50086bc36890de6d87 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1225 19:05:02.040541  325002 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22301-5579/.minikube/profiles/calico-910464/apiserver.key.3b7551d0 ...
	I1225 19:05:02.040566  325002 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22301-5579/.minikube/profiles/calico-910464/apiserver.key.3b7551d0: {Name:mk38b5fc28f106c9b8ee129efce965b268b814a1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1225 19:05:02.040678  325002 certs.go:382] copying /home/jenkins/minikube-integration/22301-5579/.minikube/profiles/calico-910464/apiserver.crt.3b7551d0 -> /home/jenkins/minikube-integration/22301-5579/.minikube/profiles/calico-910464/apiserver.crt
	I1225 19:05:02.040803  325002 certs.go:386] copying /home/jenkins/minikube-integration/22301-5579/.minikube/profiles/calico-910464/apiserver.key.3b7551d0 -> /home/jenkins/minikube-integration/22301-5579/.minikube/profiles/calico-910464/apiserver.key
	I1225 19:05:02.040906  325002 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22301-5579/.minikube/profiles/calico-910464/proxy-client.key
	I1225 19:05:02.040928  325002 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22301-5579/.minikube/profiles/calico-910464/proxy-client.crt with IP's: []
	I1225 19:05:02.090477  325002 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22301-5579/.minikube/profiles/calico-910464/proxy-client.crt ...
	I1225 19:05:02.090505  325002 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22301-5579/.minikube/profiles/calico-910464/proxy-client.crt: {Name:mk8f56ef8b761215e363a7f8cb18b671b8bed273 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1225 19:05:02.090662  325002 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22301-5579/.minikube/profiles/calico-910464/proxy-client.key ...
	I1225 19:05:02.090672  325002 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22301-5579/.minikube/profiles/calico-910464/proxy-client.key: {Name:mk9eeb3e5925dd6bc2d6ddc251f2048fad80b60f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1225 19:05:02.090848  325002 certs.go:484] found cert: /home/jenkins/minikube-integration/22301-5579/.minikube/certs/9112.pem (1338 bytes)
	W1225 19:05:02.090890  325002 certs.go:480] ignoring /home/jenkins/minikube-integration/22301-5579/.minikube/certs/9112_empty.pem, impossibly tiny 0 bytes
	I1225 19:05:02.090919  325002 certs.go:484] found cert: /home/jenkins/minikube-integration/22301-5579/.minikube/certs/ca-key.pem (1679 bytes)
	I1225 19:05:02.090958  325002 certs.go:484] found cert: /home/jenkins/minikube-integration/22301-5579/.minikube/certs/ca.pem (1078 bytes)
	I1225 19:05:02.090983  325002 certs.go:484] found cert: /home/jenkins/minikube-integration/22301-5579/.minikube/certs/cert.pem (1123 bytes)
	I1225 19:05:02.091008  325002 certs.go:484] found cert: /home/jenkins/minikube-integration/22301-5579/.minikube/certs/key.pem (1679 bytes)
	I1225 19:05:02.091051  325002 certs.go:484] found cert: /home/jenkins/minikube-integration/22301-5579/.minikube/files/etc/ssl/certs/91122.pem (1708 bytes)
	I1225 19:05:02.091611  325002 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22301-5579/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1225 19:05:02.110093  325002 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22301-5579/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1225 19:05:02.127266  325002 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22301-5579/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1225 19:05:02.144416  325002 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22301-5579/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1225 19:05:02.162174  325002 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22301-5579/.minikube/profiles/calico-910464/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1225 19:05:02.178839  325002 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22301-5579/.minikube/profiles/calico-910464/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1225 19:05:02.196468  325002 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22301-5579/.minikube/profiles/calico-910464/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1225 19:05:02.216496  325002 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22301-5579/.minikube/profiles/calico-910464/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1225 19:05:02.235498  325002 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22301-5579/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1225 19:05:02.256575  325002 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22301-5579/.minikube/certs/9112.pem --> /usr/share/ca-certificates/9112.pem (1338 bytes)
	I1225 19:05:02.274289  325002 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22301-5579/.minikube/files/etc/ssl/certs/91122.pem --> /usr/share/ca-certificates/91122.pem (1708 bytes)
	I1225 19:05:02.292749  325002 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (722 bytes)
	I1225 19:05:02.305357  325002 ssh_runner.go:195] Run: openssl version
	I1225 19:05:02.312461  325002 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1225 19:05:02.320280  325002 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1225 19:05:02.327649  325002 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1225 19:05:02.331410  325002 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 25 18:28 /usr/share/ca-certificates/minikubeCA.pem
	I1225 19:05:02.331458  325002 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1225 19:05:02.369377  325002 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1225 19:05:02.378965  325002 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
	I1225 19:05:02.387311  325002 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/9112.pem
	I1225 19:05:02.394711  325002 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/9112.pem /etc/ssl/certs/9112.pem
	I1225 19:05:02.402883  325002 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/9112.pem
	I1225 19:05:02.407060  325002 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 25 18:34 /usr/share/ca-certificates/9112.pem
	I1225 19:05:02.407118  325002 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/9112.pem
	I1225 19:05:02.446705  325002 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1225 19:05:02.455005  325002 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/9112.pem /etc/ssl/certs/51391683.0
	I1225 19:05:02.463143  325002 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/91122.pem
	I1225 19:05:02.470457  325002 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/91122.pem /etc/ssl/certs/91122.pem
	I1225 19:05:02.477443  325002 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/91122.pem
	I1225 19:05:02.480843  325002 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 25 18:34 /usr/share/ca-certificates/91122.pem
	I1225 19:05:02.480925  325002 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/91122.pem
	I1225 19:05:02.515635  325002 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1225 19:05:02.523770  325002 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/91122.pem /etc/ssl/certs/3ec20f2e.0
	I1225 19:05:02.532069  325002 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1225 19:05:02.535588  325002 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1225 19:05:02.535641  325002 kubeadm.go:401] StartCluster: {Name:calico-910464 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:calico-910464 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1225 19:05:02.535722  325002 cri.go:61] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1225 19:05:02.535770  325002 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1225 19:05:02.563562  325002 cri.go:96] found id: ""
	I1225 19:05:02.563626  325002 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1225 19:05:02.571750  325002 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1225 19:05:02.579570  325002 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1225 19:05:02.579625  325002 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1225 19:05:02.588271  325002 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1225 19:05:02.588298  325002 kubeadm.go:158] found existing configuration files:
	
	I1225 19:05:02.588345  325002 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1225 19:05:02.596007  325002 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1225 19:05:02.596059  325002 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1225 19:05:02.604131  325002 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1225 19:05:02.612070  325002 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1225 19:05:02.612126  325002 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1225 19:05:02.620108  325002 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1225 19:05:02.627999  325002 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1225 19:05:02.628086  325002 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1225 19:05:02.635387  325002 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1225 19:05:02.643359  325002 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1225 19:05:02.643435  325002 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
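	The four grep/rm pairs above are minikube's stale-kubeconfig cleanup: each kubeconfig under /etc/kubernetes is kept only if it already points at the expected control-plane endpoint. A minimal shell sketch of the same check, assuming the endpoint and file names shown in this log:

	    for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
	      sudo grep -q "https://control-plane.minikube.internal:8443" "/etc/kubernetes/$f" \
	        || sudo rm -f "/etc/kubernetes/$f"
	    done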
	I1225 19:05:02.651154  325002 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1225 19:05:02.693479  325002 kubeadm.go:319] [init] Using Kubernetes version: v1.34.3
	I1225 19:05:02.693570  325002 kubeadm.go:319] [preflight] Running pre-flight checks
	I1225 19:05:02.715933  325002 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1225 19:05:02.716012  325002 kubeadm.go:319] KERNEL_VERSION: 6.8.0-1045-gcp
	I1225 19:05:02.716059  325002 kubeadm.go:319] OS: Linux
	I1225 19:05:02.716114  325002 kubeadm.go:319] CGROUPS_CPU: enabled
	I1225 19:05:02.716179  325002 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1225 19:05:02.716282  325002 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1225 19:05:02.716378  325002 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1225 19:05:02.716450  325002 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1225 19:05:02.716512  325002 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1225 19:05:02.716589  325002 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1225 19:05:02.716652  325002 kubeadm.go:319] CGROUPS_IO: enabled
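	The CGROUPS_* lines are kubeadm's SystemVerification preflight reporting which cgroup controllers are enabled. On a cgroup v2 host (as this 6.8 GCP kernel appears to be) the same information can be read directly; a hedged example, assuming the unified hierarchy is mounted at the default path:

	    # controllers available on the root cgroup (cgroup v2)
	    cat /sys/fs/cgroup/cgroup.controllers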
	I1225 19:05:02.780358  325002 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1225 19:05:02.780521  325002 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1225 19:05:02.780670  325002 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1225 19:05:02.788372  325002 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1225 19:05:02.790412  325002 out.go:252]   - Generating certificates and keys ...
	I1225 19:05:02.790512  325002 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1225 19:05:02.790721  325002 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1225 19:05:02.947659  325002 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	W1225 19:04:58.656220  316482 node_ready.go:57] node "kindnet-910464" has "Ready":"False" status (will retry)
	W1225 19:05:01.155428  316482 node_ready.go:57] node "kindnet-910464" has "Ready":"False" status (will retry)
	I1225 19:05:02.994997  325002 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1225 19:05:03.412611  325002 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1225 19:05:03.699499  325002 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1225 19:05:03.827390  325002 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1225 19:05:03.827584  325002 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [calico-910464 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1225 19:05:04.081986  325002 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1225 19:05:04.082169  325002 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [calico-910464 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1225 19:05:04.525525  325002 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1225 19:05:04.774481  325002 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1225 19:05:05.330116  325002 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1225 19:05:05.330210  325002 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1225 19:05:05.433744  325002 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1225 19:05:05.604511  325002 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1225 19:05:05.673164  325002 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1225 19:05:05.788680  325002 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1225 19:05:06.229122  325002 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1225 19:05:06.229714  325002 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1225 19:05:06.233192  325002 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1225 19:05:02.624081  260034 logs.go:123] Gathering logs for dmesg ...
	I1225 19:05:02.624106  260034 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1225 19:05:02.638134  260034 logs.go:123] Gathering logs for kube-apiserver [e3d5802d1e3d9285d4b0925e9f5391b90305352a65abbab3d6461e977e07719c] ...
	I1225 19:05:02.638164  260034 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e3d5802d1e3d9285d4b0925e9f5391b90305352a65abbab3d6461e977e07719c"
	I1225 19:05:02.676056  260034 logs.go:123] Gathering logs for kube-controller-manager [d200870ef180511332d71d52b930a74dc0fbdb1935fecb1c39424f46f42d10fb] ...
	I1225 19:05:02.676084  260034 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d200870ef180511332d71d52b930a74dc0fbdb1935fecb1c39424f46f42d10fb"
	I1225 19:05:02.703746  260034 logs.go:123] Gathering logs for describe nodes ...
	I1225 19:05:02.703770  260034 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1225 19:05:02.764512  260034 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1225 19:05:02.764529  260034 logs.go:123] Gathering logs for kube-apiserver [c7e7a74e4da2eb0a24231c0da6fca5c4d2624c5eff5fd59ea857dba6d0787036] ...
	I1225 19:05:02.764540  260034 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 c7e7a74e4da2eb0a24231c0da6fca5c4d2624c5eff5fd59ea857dba6d0787036"
	I1225 19:05:02.798735  260034 logs.go:123] Gathering logs for etcd [b9e20d208a63ece66f4b7d503d5220ba92810f69d6963dee30a8cf70eeec5508] ...
	I1225 19:05:02.798781  260034 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b9e20d208a63ece66f4b7d503d5220ba92810f69d6963dee30a8cf70eeec5508"
	I1225 19:05:02.832062  260034 logs.go:123] Gathering logs for kube-scheduler [5908cdc560e3221d929c96c779b2e73ece21f96acb63b3adbf448cc7de791f2b] ...
	I1225 19:05:02.832088  260034 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5908cdc560e3221d929c96c779b2e73ece21f96acb63b3adbf448cc7de791f2b"
	I1225 19:05:02.860169  260034 logs.go:123] Gathering logs for kube-scheduler [47a1daa4bde269c4add70fd957319c2bd011e78566a1187d9a76b78fb4005782] ...
	I1225 19:05:02.860199  260034 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 47a1daa4bde269c4add70fd957319c2bd011e78566a1187d9a76b78fb4005782"
	I1225 19:05:02.888930  260034 logs.go:123] Gathering logs for kube-controller-manager [33343dda71f9e4a85025812aa89a625b9ed4aacd8cf40e537a040b7a352c6338] ...
	I1225 19:05:02.888954  260034 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 33343dda71f9e4a85025812aa89a625b9ed4aacd8cf40e537a040b7a352c6338"
	I1225 19:05:05.421456  260034 api_server.go:299] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1225 19:05:05.421907  260034 api_server.go:315] stopped: https://192.168.94.2:8443/healthz: Get "https://192.168.94.2:8443/healthz": dial tcp 192.168.94.2:8443: connect: connection refused
	I1225 19:05:05.421962  260034 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1225 19:05:05.422013  260034 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1225 19:05:05.449968  260034 cri.go:96] found id: "c7e7a74e4da2eb0a24231c0da6fca5c4d2624c5eff5fd59ea857dba6d0787036"
	I1225 19:05:05.449993  260034 cri.go:96] found id: "e3d5802d1e3d9285d4b0925e9f5391b90305352a65abbab3d6461e977e07719c"
	I1225 19:05:05.449999  260034 cri.go:96] found id: ""
	I1225 19:05:05.450008  260034 logs.go:282] 2 containers: [c7e7a74e4da2eb0a24231c0da6fca5c4d2624c5eff5fd59ea857dba6d0787036 e3d5802d1e3d9285d4b0925e9f5391b90305352a65abbab3d6461e977e07719c]
	I1225 19:05:05.450073  260034 ssh_runner.go:195] Run: which crictl
	I1225 19:05:05.454102  260034 ssh_runner.go:195] Run: which crictl
	I1225 19:05:05.458255  260034 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1225 19:05:05.458313  260034 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1225 19:05:05.487016  260034 cri.go:96] found id: "b9e20d208a63ece66f4b7d503d5220ba92810f69d6963dee30a8cf70eeec5508"
	I1225 19:05:05.487039  260034 cri.go:96] found id: ""
	I1225 19:05:05.487047  260034 logs.go:282] 1 containers: [b9e20d208a63ece66f4b7d503d5220ba92810f69d6963dee30a8cf70eeec5508]
	I1225 19:05:05.487101  260034 ssh_runner.go:195] Run: which crictl
	I1225 19:05:05.490933  260034 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1225 19:05:05.491015  260034 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1225 19:05:05.517387  260034 cri.go:96] found id: ""
	I1225 19:05:05.517414  260034 logs.go:282] 0 containers: []
	W1225 19:05:05.517425  260034 logs.go:284] No container was found matching "coredns"
	I1225 19:05:05.517432  260034 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1225 19:05:05.517489  260034 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1225 19:05:05.543076  260034 cri.go:96] found id: "5908cdc560e3221d929c96c779b2e73ece21f96acb63b3adbf448cc7de791f2b"
	I1225 19:05:05.543100  260034 cri.go:96] found id: "47a1daa4bde269c4add70fd957319c2bd011e78566a1187d9a76b78fb4005782"
	I1225 19:05:05.543106  260034 cri.go:96] found id: ""
	I1225 19:05:05.543114  260034 logs.go:282] 2 containers: [5908cdc560e3221d929c96c779b2e73ece21f96acb63b3adbf448cc7de791f2b 47a1daa4bde269c4add70fd957319c2bd011e78566a1187d9a76b78fb4005782]
	I1225 19:05:05.543168  260034 ssh_runner.go:195] Run: which crictl
	I1225 19:05:05.546886  260034 ssh_runner.go:195] Run: which crictl
	I1225 19:05:05.550425  260034 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1225 19:05:05.550481  260034 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1225 19:05:05.578265  260034 cri.go:96] found id: ""
	I1225 19:05:05.578288  260034 logs.go:282] 0 containers: []
	W1225 19:05:05.578299  260034 logs.go:284] No container was found matching "kube-proxy"
	I1225 19:05:05.578305  260034 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1225 19:05:05.578355  260034 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1225 19:05:05.607428  260034 cri.go:96] found id: "d200870ef180511332d71d52b930a74dc0fbdb1935fecb1c39424f46f42d10fb"
	I1225 19:05:05.607451  260034 cri.go:96] found id: "33343dda71f9e4a85025812aa89a625b9ed4aacd8cf40e537a040b7a352c6338"
	I1225 19:05:05.607457  260034 cri.go:96] found id: ""
	I1225 19:05:05.607466  260034 logs.go:282] 2 containers: [d200870ef180511332d71d52b930a74dc0fbdb1935fecb1c39424f46f42d10fb 33343dda71f9e4a85025812aa89a625b9ed4aacd8cf40e537a040b7a352c6338]
	I1225 19:05:05.607524  260034 ssh_runner.go:195] Run: which crictl
	I1225 19:05:05.611800  260034 ssh_runner.go:195] Run: which crictl
	I1225 19:05:05.616781  260034 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1225 19:05:05.616839  260034 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1225 19:05:05.645135  260034 cri.go:96] found id: ""
	I1225 19:05:05.645161  260034 logs.go:282] 0 containers: []
	W1225 19:05:05.645172  260034 logs.go:284] No container was found matching "kindnet"
	I1225 19:05:05.645179  260034 cri.go:61] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1225 19:05:05.645233  260034 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=storage-provisioner
	I1225 19:05:05.673164  260034 cri.go:96] found id: ""
	I1225 19:05:05.673191  260034 logs.go:282] 0 containers: []
	W1225 19:05:05.673202  260034 logs.go:284] No container was found matching "storage-provisioner"
	I1225 19:05:05.673212  260034 logs.go:123] Gathering logs for kube-scheduler [5908cdc560e3221d929c96c779b2e73ece21f96acb63b3adbf448cc7de791f2b] ...
	I1225 19:05:05.673226  260034 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5908cdc560e3221d929c96c779b2e73ece21f96acb63b3adbf448cc7de791f2b"
	I1225 19:05:05.701077  260034 logs.go:123] Gathering logs for kube-controller-manager [d200870ef180511332d71d52b930a74dc0fbdb1935fecb1c39424f46f42d10fb] ...
	I1225 19:05:05.701102  260034 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d200870ef180511332d71d52b930a74dc0fbdb1935fecb1c39424f46f42d10fb"
	I1225 19:05:05.728588  260034 logs.go:123] Gathering logs for container status ...
	I1225 19:05:05.728616  260034 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1225 19:05:05.761067  260034 logs.go:123] Gathering logs for describe nodes ...
	I1225 19:05:05.761092  260034 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1225 19:05:05.816495  260034 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1225 19:05:05.816516  260034 logs.go:123] Gathering logs for etcd [b9e20d208a63ece66f4b7d503d5220ba92810f69d6963dee30a8cf70eeec5508] ...
	I1225 19:05:05.816530  260034 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b9e20d208a63ece66f4b7d503d5220ba92810f69d6963dee30a8cf70eeec5508"
	I1225 19:05:05.851176  260034 logs.go:123] Gathering logs for kube-scheduler [47a1daa4bde269c4add70fd957319c2bd011e78566a1187d9a76b78fb4005782] ...
	I1225 19:05:05.851203  260034 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 47a1daa4bde269c4add70fd957319c2bd011e78566a1187d9a76b78fb4005782"
	I1225 19:05:05.877591  260034 logs.go:123] Gathering logs for kube-controller-manager [33343dda71f9e4a85025812aa89a625b9ed4aacd8cf40e537a040b7a352c6338] ...
	I1225 19:05:05.877615  260034 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 33343dda71f9e4a85025812aa89a625b9ed4aacd8cf40e537a040b7a352c6338"
	I1225 19:05:05.908077  260034 logs.go:123] Gathering logs for CRI-O ...
	I1225 19:05:05.908102  260034 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1225 19:05:05.975459  260034 logs.go:123] Gathering logs for kubelet ...
	I1225 19:05:05.975500  260034 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1225 19:05:06.065372  260034 logs.go:123] Gathering logs for dmesg ...
	I1225 19:05:06.065407  260034 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1225 19:05:06.079391  260034 logs.go:123] Gathering logs for kube-apiserver [c7e7a74e4da2eb0a24231c0da6fca5c4d2624c5eff5fd59ea857dba6d0787036] ...
	I1225 19:05:06.079424  260034 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 c7e7a74e4da2eb0a24231c0da6fca5c4d2624c5eff5fd59ea857dba6d0787036"
	I1225 19:05:06.109804  260034 logs.go:123] Gathering logs for kube-apiserver [e3d5802d1e3d9285d4b0925e9f5391b90305352a65abbab3d6461e977e07719c] ...
	I1225 19:05:06.109832  260034 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e3d5802d1e3d9285d4b0925e9f5391b90305352a65abbab3d6461e977e07719c"
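	The repeated "Gathering logs for ..." passes above are the diagnostic loop minikube runs while the apiserver on this profile is unreachable: it lists control-plane containers with crictl and tails each one's logs. Roughly the same can be done by hand on the node; a sketch assuming crictl is on the PATH:

	    # pick one kube-apiserver container ID (any state) and tail its last 400 log lines
	    ID=$(sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver | head -n1)
	    sudo crictl logs --tail 400 "$ID"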
	I1225 19:05:06.234754  325002 out.go:252]   - Booting up control plane ...
	I1225 19:05:06.234871  325002 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1225 19:05:06.234995  325002 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1225 19:05:06.235643  325002 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1225 19:05:06.261721  325002 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1225 19:05:06.261856  325002 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1225 19:05:06.268256  325002 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1225 19:05:06.268572  325002 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1225 19:05:06.268617  325002 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1225 19:05:06.372958  325002 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1225 19:05:06.373126  325002 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1225 19:05:07.373907  325002 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.001074188s
	I1225 19:05:07.378656  325002 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1225 19:05:07.378807  325002 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.76.2:8443/livez
	I1225 19:05:07.378978  325002 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1225 19:05:07.379091  325002 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	W1225 19:05:03.654834  316482 node_ready.go:57] node "kindnet-910464" has "Ready":"False" status (will retry)
	W1225 19:05:05.655169  316482 node_ready.go:57] node "kindnet-910464" has "Ready":"False" status (will retry)
	I1225 19:05:06.654182  316482 node_ready.go:49] node "kindnet-910464" is "Ready"
	I1225 19:05:06.654221  316482 node_ready.go:38] duration metric: took 12.002570095s for node "kindnet-910464" to be "Ready" ...
	I1225 19:05:06.654234  316482 api_server.go:52] waiting for apiserver process to appear ...
	I1225 19:05:06.654281  316482 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1225 19:05:06.668405  316482 api_server.go:72] duration metric: took 12.362051478s to wait for apiserver process to appear ...
	I1225 19:05:06.668436  316482 api_server.go:88] waiting for apiserver healthz status ...
	I1225 19:05:06.668456  316482 api_server.go:299] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1225 19:05:06.673688  316482 api_server.go:325] https://192.168.85.2:8443/healthz returned 200:
	ok
	I1225 19:05:06.674775  316482 api_server.go:141] control plane version: v1.34.3
	I1225 19:05:06.674809  316482 api_server.go:131] duration metric: took 6.366234ms to wait for apiserver health ...
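	The healthz probe above is a plain HTTPS GET against the apiserver. The same check can be reproduced from a shell, using the node IP and port from this log; -k skips certificate verification, or point curl at the cluster CA instead:

	    curl -k https://192.168.85.2:8443/healthz
	    # a healthy apiserver answers: ok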
	I1225 19:05:06.674820  316482 system_pods.go:43] waiting for kube-system pods to appear ...
	I1225 19:05:06.678651  316482 system_pods.go:59] 8 kube-system pods found
	I1225 19:05:06.678688  316482 system_pods.go:61] "coredns-66bc5c9577-f9kkb" [eae21b9f-a818-410a-9cd2-b5f964df0348] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1225 19:05:06.678696  316482 system_pods.go:61] "etcd-kindnet-910464" [1cd88165-f166-41bc-8f21-ce12c03a55fe] Running
	I1225 19:05:06.678709  316482 system_pods.go:61] "kindnet-hsfxd" [c2a15ba2-8a5a-4895-8e79-bfb006e2ad60] Running
	I1225 19:05:06.678715  316482 system_pods.go:61] "kube-apiserver-kindnet-910464" [a679fdae-bfcd-4481-9ec8-e6d0961b64b7] Running
	I1225 19:05:06.678723  316482 system_pods.go:61] "kube-controller-manager-kindnet-910464" [b2d7b97e-8cee-4f1a-867f-a1b17d97ec6f] Running
	I1225 19:05:06.678729  316482 system_pods.go:61] "kube-proxy-xd9t4" [0b2b72d2-1e3d-4263-bd67-3a29efbe0ec4] Running
	I1225 19:05:06.678733  316482 system_pods.go:61] "kube-scheduler-kindnet-910464" [b15885a9-575e-492d-9815-a087c66b53db] Running
	I1225 19:05:06.678741  316482 system_pods.go:61] "storage-provisioner" [05db71f5-eb7a-45d1-a812-37bfa41aef72] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1225 19:05:06.678753  316482 system_pods.go:74] duration metric: took 3.926483ms to wait for pod list to return data ...
	I1225 19:05:06.678766  316482 default_sa.go:34] waiting for default service account to be created ...
	I1225 19:05:06.681244  316482 default_sa.go:45] found service account: "default"
	I1225 19:05:06.681262  316482 default_sa.go:55] duration metric: took 2.48959ms for default service account to be created ...
	I1225 19:05:06.681271  316482 system_pods.go:116] waiting for k8s-apps to be running ...
	I1225 19:05:06.684492  316482 system_pods.go:86] 8 kube-system pods found
	I1225 19:05:06.684517  316482 system_pods.go:89] "coredns-66bc5c9577-f9kkb" [eae21b9f-a818-410a-9cd2-b5f964df0348] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1225 19:05:06.684524  316482 system_pods.go:89] "etcd-kindnet-910464" [1cd88165-f166-41bc-8f21-ce12c03a55fe] Running
	I1225 19:05:06.684534  316482 system_pods.go:89] "kindnet-hsfxd" [c2a15ba2-8a5a-4895-8e79-bfb006e2ad60] Running
	I1225 19:05:06.684540  316482 system_pods.go:89] "kube-apiserver-kindnet-910464" [a679fdae-bfcd-4481-9ec8-e6d0961b64b7] Running
	I1225 19:05:06.684556  316482 system_pods.go:89] "kube-controller-manager-kindnet-910464" [b2d7b97e-8cee-4f1a-867f-a1b17d97ec6f] Running
	I1225 19:05:06.684566  316482 system_pods.go:89] "kube-proxy-xd9t4" [0b2b72d2-1e3d-4263-bd67-3a29efbe0ec4] Running
	I1225 19:05:06.684571  316482 system_pods.go:89] "kube-scheduler-kindnet-910464" [b15885a9-575e-492d-9815-a087c66b53db] Running
	I1225 19:05:06.684583  316482 system_pods.go:89] "storage-provisioner" [05db71f5-eb7a-45d1-a812-37bfa41aef72] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1225 19:05:06.684607  316482 retry.go:84] will retry after 300ms: missing components: kube-dns
	I1225 19:05:06.957549  316482 system_pods.go:86] 8 kube-system pods found
	I1225 19:05:06.957581  316482 system_pods.go:89] "coredns-66bc5c9577-f9kkb" [eae21b9f-a818-410a-9cd2-b5f964df0348] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1225 19:05:06.957587  316482 system_pods.go:89] "etcd-kindnet-910464" [1cd88165-f166-41bc-8f21-ce12c03a55fe] Running
	I1225 19:05:06.957593  316482 system_pods.go:89] "kindnet-hsfxd" [c2a15ba2-8a5a-4895-8e79-bfb006e2ad60] Running
	I1225 19:05:06.957596  316482 system_pods.go:89] "kube-apiserver-kindnet-910464" [a679fdae-bfcd-4481-9ec8-e6d0961b64b7] Running
	I1225 19:05:06.957600  316482 system_pods.go:89] "kube-controller-manager-kindnet-910464" [b2d7b97e-8cee-4f1a-867f-a1b17d97ec6f] Running
	I1225 19:05:06.957604  316482 system_pods.go:89] "kube-proxy-xd9t4" [0b2b72d2-1e3d-4263-bd67-3a29efbe0ec4] Running
	I1225 19:05:06.957607  316482 system_pods.go:89] "kube-scheduler-kindnet-910464" [b15885a9-575e-492d-9815-a087c66b53db] Running
	I1225 19:05:06.957611  316482 system_pods.go:89] "storage-provisioner" [05db71f5-eb7a-45d1-a812-37bfa41aef72] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1225 19:05:07.308317  316482 system_pods.go:86] 8 kube-system pods found
	I1225 19:05:07.308359  316482 system_pods.go:89] "coredns-66bc5c9577-f9kkb" [eae21b9f-a818-410a-9cd2-b5f964df0348] Running
	I1225 19:05:07.308366  316482 system_pods.go:89] "etcd-kindnet-910464" [1cd88165-f166-41bc-8f21-ce12c03a55fe] Running
	I1225 19:05:07.308370  316482 system_pods.go:89] "kindnet-hsfxd" [c2a15ba2-8a5a-4895-8e79-bfb006e2ad60] Running
	I1225 19:05:07.308373  316482 system_pods.go:89] "kube-apiserver-kindnet-910464" [a679fdae-bfcd-4481-9ec8-e6d0961b64b7] Running
	I1225 19:05:07.308377  316482 system_pods.go:89] "kube-controller-manager-kindnet-910464" [b2d7b97e-8cee-4f1a-867f-a1b17d97ec6f] Running
	I1225 19:05:07.308388  316482 system_pods.go:89] "kube-proxy-xd9t4" [0b2b72d2-1e3d-4263-bd67-3a29efbe0ec4] Running
	I1225 19:05:07.308394  316482 system_pods.go:89] "kube-scheduler-kindnet-910464" [b15885a9-575e-492d-9815-a087c66b53db] Running
	I1225 19:05:07.308402  316482 system_pods.go:89] "storage-provisioner" [05db71f5-eb7a-45d1-a812-37bfa41aef72] Running
	I1225 19:05:07.308413  316482 system_pods.go:126] duration metric: took 627.134993ms to wait for k8s-apps to be running ...
	I1225 19:05:07.308426  316482 system_svc.go:44] waiting for kubelet service to be running ....
	I1225 19:05:07.308486  316482 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1225 19:05:07.322838  316482 system_svc.go:56] duration metric: took 14.400665ms WaitForService to wait for kubelet
	I1225 19:05:07.322879  316482 kubeadm.go:587] duration metric: took 13.016528928s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1225 19:05:07.322920  316482 node_conditions.go:102] verifying NodePressure condition ...
	I1225 19:05:07.325830  316482 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1225 19:05:07.325860  316482 node_conditions.go:123] node cpu capacity is 8
	I1225 19:05:07.325878  316482 node_conditions.go:105] duration metric: took 2.952951ms to run NodePressure ...
	I1225 19:05:07.325975  316482 start.go:242] waiting for startup goroutines ...
	I1225 19:05:07.325993  316482 start.go:247] waiting for cluster config update ...
	I1225 19:05:07.326020  316482 start.go:256] writing updated cluster config ...
	I1225 19:05:07.326321  316482 ssh_runner.go:195] Run: rm -f paused
	I1225 19:05:07.331140  316482 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1225 19:05:07.408468  316482 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-f9kkb" in "kube-system" namespace to be "Ready" or be gone ...
	I1225 19:05:07.414171  316482 pod_ready.go:94] pod "coredns-66bc5c9577-f9kkb" is "Ready"
	I1225 19:05:07.414198  316482 pod_ready.go:86] duration metric: took 5.709581ms for pod "coredns-66bc5c9577-f9kkb" in "kube-system" namespace to be "Ready" or be gone ...
	I1225 19:05:07.416996  316482 pod_ready.go:83] waiting for pod "etcd-kindnet-910464" in "kube-system" namespace to be "Ready" or be gone ...
	I1225 19:05:07.422376  316482 pod_ready.go:94] pod "etcd-kindnet-910464" is "Ready"
	I1225 19:05:07.422403  316482 pod_ready.go:86] duration metric: took 5.382037ms for pod "etcd-kindnet-910464" in "kube-system" namespace to be "Ready" or be gone ...
	I1225 19:05:07.425728  316482 pod_ready.go:83] waiting for pod "kube-apiserver-kindnet-910464" in "kube-system" namespace to be "Ready" or be gone ...
	I1225 19:05:07.430640  316482 pod_ready.go:94] pod "kube-apiserver-kindnet-910464" is "Ready"
	I1225 19:05:07.430666  316482 pod_ready.go:86] duration metric: took 4.912231ms for pod "kube-apiserver-kindnet-910464" in "kube-system" namespace to be "Ready" or be gone ...
	I1225 19:05:07.432824  316482 pod_ready.go:83] waiting for pod "kube-controller-manager-kindnet-910464" in "kube-system" namespace to be "Ready" or be gone ...
	I1225 19:05:07.735441  316482 pod_ready.go:94] pod "kube-controller-manager-kindnet-910464" is "Ready"
	I1225 19:05:07.735472  316482 pod_ready.go:86] duration metric: took 302.626998ms for pod "kube-controller-manager-kindnet-910464" in "kube-system" namespace to be "Ready" or be gone ...
	I1225 19:05:07.936111  316482 pod_ready.go:83] waiting for pod "kube-proxy-xd9t4" in "kube-system" namespace to be "Ready" or be gone ...
	I1225 19:05:08.335709  316482 pod_ready.go:94] pod "kube-proxy-xd9t4" is "Ready"
	I1225 19:05:08.335736  316482 pod_ready.go:86] duration metric: took 399.595303ms for pod "kube-proxy-xd9t4" in "kube-system" namespace to be "Ready" or be gone ...
	I1225 19:05:08.536161  316482 pod_ready.go:83] waiting for pod "kube-scheduler-kindnet-910464" in "kube-system" namespace to be "Ready" or be gone ...
	I1225 19:05:08.935954  316482 pod_ready.go:94] pod "kube-scheduler-kindnet-910464" is "Ready"
	I1225 19:05:08.935979  316482 pod_ready.go:86] duration metric: took 399.794429ms for pod "kube-scheduler-kindnet-910464" in "kube-system" namespace to be "Ready" or be gone ...
	I1225 19:05:08.935991  316482 pod_ready.go:40] duration metric: took 1.604819617s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1225 19:05:08.995638  316482 start.go:625] kubectl: 1.35.0, cluster: 1.34.3 (minor skew: 1)
	I1225 19:05:08.997529  316482 out.go:179] * Done! kubectl is now configured to use "kindnet-910464" cluster and "default" namespace by default
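	After the "Done!" line the kubeconfig context has been switched to the new profile, so standard kubectl commands target this cluster. A quick sanity check, assuming the kubectl 1.35.0 reported above:

	    kubectl config current-context   # expected: kindnet-910464
	    kubectl get nodes -o wide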
	I1225 19:05:09.186587  325002 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 1.808154935s
	I1225 19:05:09.491937  325002 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 2.113765437s
	I1225 19:05:11.380765  325002 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 4.002414344s
	I1225 19:05:11.399303  325002 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1225 19:05:11.413223  325002 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1225 19:05:11.424642  325002 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1225 19:05:11.424981  325002 kubeadm.go:319] [mark-control-plane] Marking the node calico-910464 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1225 19:05:11.433373  325002 kubeadm.go:319] [bootstrap-token] Using token: l3otb8.dp8zrsjgr44c03sh
	I1225 19:05:08.645960  260034 api_server.go:299] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1225 19:05:08.646384  260034 api_server.go:315] stopped: https://192.168.94.2:8443/healthz: Get "https://192.168.94.2:8443/healthz": dial tcp 192.168.94.2:8443: connect: connection refused
	I1225 19:05:08.646444  260034 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1225 19:05:08.646499  260034 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1225 19:05:08.679827  260034 cri.go:96] found id: "c7e7a74e4da2eb0a24231c0da6fca5c4d2624c5eff5fd59ea857dba6d0787036"
	I1225 19:05:08.679849  260034 cri.go:96] found id: "e3d5802d1e3d9285d4b0925e9f5391b90305352a65abbab3d6461e977e07719c"
	I1225 19:05:08.679855  260034 cri.go:96] found id: ""
	I1225 19:05:08.679864  260034 logs.go:282] 2 containers: [c7e7a74e4da2eb0a24231c0da6fca5c4d2624c5eff5fd59ea857dba6d0787036 e3d5802d1e3d9285d4b0925e9f5391b90305352a65abbab3d6461e977e07719c]
	I1225 19:05:08.679945  260034 ssh_runner.go:195] Run: which crictl
	I1225 19:05:08.685106  260034 ssh_runner.go:195] Run: which crictl
	I1225 19:05:08.694404  260034 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1225 19:05:08.694482  260034 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1225 19:05:08.726877  260034 cri.go:96] found id: "b9e20d208a63ece66f4b7d503d5220ba92810f69d6963dee30a8cf70eeec5508"
	I1225 19:05:08.726920  260034 cri.go:96] found id: ""
	I1225 19:05:08.726929  260034 logs.go:282] 1 containers: [b9e20d208a63ece66f4b7d503d5220ba92810f69d6963dee30a8cf70eeec5508]
	I1225 19:05:08.726989  260034 ssh_runner.go:195] Run: which crictl
	I1225 19:05:08.732051  260034 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1225 19:05:08.732118  260034 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1225 19:05:08.766433  260034 cri.go:96] found id: ""
	I1225 19:05:08.766468  260034 logs.go:282] 0 containers: []
	W1225 19:05:08.766479  260034 logs.go:284] No container was found matching "coredns"
	I1225 19:05:08.766513  260034 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1225 19:05:08.766573  260034 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1225 19:05:08.802769  260034 cri.go:96] found id: "5908cdc560e3221d929c96c779b2e73ece21f96acb63b3adbf448cc7de791f2b"
	I1225 19:05:08.802793  260034 cri.go:96] found id: "47a1daa4bde269c4add70fd957319c2bd011e78566a1187d9a76b78fb4005782"
	I1225 19:05:08.802800  260034 cri.go:96] found id: ""
	I1225 19:05:08.802828  260034 logs.go:282] 2 containers: [5908cdc560e3221d929c96c779b2e73ece21f96acb63b3adbf448cc7de791f2b 47a1daa4bde269c4add70fd957319c2bd011e78566a1187d9a76b78fb4005782]
	I1225 19:05:08.802886  260034 ssh_runner.go:195] Run: which crictl
	I1225 19:05:08.807955  260034 ssh_runner.go:195] Run: which crictl
	I1225 19:05:08.812197  260034 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1225 19:05:08.812263  260034 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1225 19:05:08.846128  260034 cri.go:96] found id: ""
	I1225 19:05:08.846163  260034 logs.go:282] 0 containers: []
	W1225 19:05:08.846175  260034 logs.go:284] No container was found matching "kube-proxy"
	I1225 19:05:08.846209  260034 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1225 19:05:08.846279  260034 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1225 19:05:08.879200  260034 cri.go:96] found id: "d200870ef180511332d71d52b930a74dc0fbdb1935fecb1c39424f46f42d10fb"
	I1225 19:05:08.879248  260034 cri.go:96] found id: "33343dda71f9e4a85025812aa89a625b9ed4aacd8cf40e537a040b7a352c6338"
	I1225 19:05:08.879253  260034 cri.go:96] found id: ""
	I1225 19:05:08.879263  260034 logs.go:282] 2 containers: [d200870ef180511332d71d52b930a74dc0fbdb1935fecb1c39424f46f42d10fb 33343dda71f9e4a85025812aa89a625b9ed4aacd8cf40e537a040b7a352c6338]
	I1225 19:05:08.879329  260034 ssh_runner.go:195] Run: which crictl
	I1225 19:05:08.885712  260034 ssh_runner.go:195] Run: which crictl
	I1225 19:05:08.891345  260034 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1225 19:05:08.891409  260034 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1225 19:05:08.930337  260034 cri.go:96] found id: ""
	I1225 19:05:08.930358  260034 logs.go:282] 0 containers: []
	W1225 19:05:08.930367  260034 logs.go:284] No container was found matching "kindnet"
	I1225 19:05:08.930374  260034 cri.go:61] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1225 19:05:08.930426  260034 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=storage-provisioner
	I1225 19:05:08.969451  260034 cri.go:96] found id: ""
	I1225 19:05:08.969477  260034 logs.go:282] 0 containers: []
	W1225 19:05:08.969489  260034 logs.go:284] No container was found matching "storage-provisioner"
	I1225 19:05:08.969499  260034 logs.go:123] Gathering logs for kube-apiserver [c7e7a74e4da2eb0a24231c0da6fca5c4d2624c5eff5fd59ea857dba6d0787036] ...
	I1225 19:05:08.969514  260034 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 c7e7a74e4da2eb0a24231c0da6fca5c4d2624c5eff5fd59ea857dba6d0787036"
	I1225 19:05:09.009162  260034 logs.go:123] Gathering logs for etcd [b9e20d208a63ece66f4b7d503d5220ba92810f69d6963dee30a8cf70eeec5508] ...
	I1225 19:05:09.009196  260034 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b9e20d208a63ece66f4b7d503d5220ba92810f69d6963dee30a8cf70eeec5508"
	I1225 19:05:09.050080  260034 logs.go:123] Gathering logs for kube-scheduler [47a1daa4bde269c4add70fd957319c2bd011e78566a1187d9a76b78fb4005782] ...
	I1225 19:05:09.050109  260034 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 47a1daa4bde269c4add70fd957319c2bd011e78566a1187d9a76b78fb4005782"
	I1225 19:05:09.084968  260034 logs.go:123] Gathering logs for kube-controller-manager [d200870ef180511332d71d52b930a74dc0fbdb1935fecb1c39424f46f42d10fb] ...
	I1225 19:05:09.084992  260034 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d200870ef180511332d71d52b930a74dc0fbdb1935fecb1c39424f46f42d10fb"
	I1225 19:05:09.114561  260034 logs.go:123] Gathering logs for kube-controller-manager [33343dda71f9e4a85025812aa89a625b9ed4aacd8cf40e537a040b7a352c6338] ...
	I1225 19:05:09.114587  260034 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 33343dda71f9e4a85025812aa89a625b9ed4aacd8cf40e537a040b7a352c6338"
	I1225 19:05:09.154277  260034 logs.go:123] Gathering logs for kube-apiserver [e3d5802d1e3d9285d4b0925e9f5391b90305352a65abbab3d6461e977e07719c] ...
	I1225 19:05:09.154307  260034 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e3d5802d1e3d9285d4b0925e9f5391b90305352a65abbab3d6461e977e07719c"
	I1225 19:05:09.194556  260034 logs.go:123] Gathering logs for kube-scheduler [5908cdc560e3221d929c96c779b2e73ece21f96acb63b3adbf448cc7de791f2b] ...
	I1225 19:05:09.194591  260034 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5908cdc560e3221d929c96c779b2e73ece21f96acb63b3adbf448cc7de791f2b"
	I1225 19:05:09.229130  260034 logs.go:123] Gathering logs for CRI-O ...
	I1225 19:05:09.229173  260034 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1225 19:05:09.298067  260034 logs.go:123] Gathering logs for container status ...
	I1225 19:05:09.298105  260034 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1225 19:05:09.351490  260034 logs.go:123] Gathering logs for kubelet ...
	I1225 19:05:09.351549  260034 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1225 19:05:09.484756  260034 logs.go:123] Gathering logs for dmesg ...
	I1225 19:05:09.484830  260034 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1225 19:05:09.500354  260034 logs.go:123] Gathering logs for describe nodes ...
	I1225 19:05:09.500376  260034 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1225 19:05:09.555377  260034 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1225 19:05:12.056033  260034 api_server.go:299] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1225 19:05:12.056476  260034 api_server.go:315] stopped: https://192.168.94.2:8443/healthz: Get "https://192.168.94.2:8443/healthz": dial tcp 192.168.94.2:8443: connect: connection refused
	I1225 19:05:12.056532  260034 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1225 19:05:12.056590  260034 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1225 19:05:12.087681  260034 cri.go:96] found id: "c7e7a74e4da2eb0a24231c0da6fca5c4d2624c5eff5fd59ea857dba6d0787036"
	I1225 19:05:12.087706  260034 cri.go:96] found id: "e3d5802d1e3d9285d4b0925e9f5391b90305352a65abbab3d6461e977e07719c"
	I1225 19:05:12.087712  260034 cri.go:96] found id: ""
	I1225 19:05:12.087721  260034 logs.go:282] 2 containers: [c7e7a74e4da2eb0a24231c0da6fca5c4d2624c5eff5fd59ea857dba6d0787036 e3d5802d1e3d9285d4b0925e9f5391b90305352a65abbab3d6461e977e07719c]
	I1225 19:05:12.087783  260034 ssh_runner.go:195] Run: which crictl
	I1225 19:05:12.092030  260034 ssh_runner.go:195] Run: which crictl
	I1225 19:05:12.095768  260034 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1225 19:05:12.095839  260034 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1225 19:05:12.124924  260034 cri.go:96] found id: "b9e20d208a63ece66f4b7d503d5220ba92810f69d6963dee30a8cf70eeec5508"
	I1225 19:05:12.124949  260034 cri.go:96] found id: ""
	I1225 19:05:12.124960  260034 logs.go:282] 1 containers: [b9e20d208a63ece66f4b7d503d5220ba92810f69d6963dee30a8cf70eeec5508]
	I1225 19:05:12.125022  260034 ssh_runner.go:195] Run: which crictl
	I1225 19:05:12.129406  260034 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1225 19:05:12.129474  260034 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1225 19:05:12.164547  260034 cri.go:96] found id: ""
	I1225 19:05:12.164575  260034 logs.go:282] 0 containers: []
	W1225 19:05:12.164585  260034 logs.go:284] No container was found matching "coredns"
	I1225 19:05:12.164591  260034 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1225 19:05:12.164644  260034 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1225 19:05:12.199246  260034 cri.go:96] found id: "5908cdc560e3221d929c96c779b2e73ece21f96acb63b3adbf448cc7de791f2b"
	I1225 19:05:12.199279  260034 cri.go:96] found id: "47a1daa4bde269c4add70fd957319c2bd011e78566a1187d9a76b78fb4005782"
	I1225 19:05:12.199286  260034 cri.go:96] found id: ""
	I1225 19:05:12.199295  260034 logs.go:282] 2 containers: [5908cdc560e3221d929c96c779b2e73ece21f96acb63b3adbf448cc7de791f2b 47a1daa4bde269c4add70fd957319c2bd011e78566a1187d9a76b78fb4005782]
	I1225 19:05:12.199357  260034 ssh_runner.go:195] Run: which crictl
	I1225 19:05:12.203670  260034 ssh_runner.go:195] Run: which crictl
	I1225 19:05:12.208716  260034 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1225 19:05:12.208790  260034 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1225 19:05:12.239781  260034 cri.go:96] found id: ""
	I1225 19:05:12.239802  260034 logs.go:282] 0 containers: []
	W1225 19:05:12.239810  260034 logs.go:284] No container was found matching "kube-proxy"
	I1225 19:05:12.239815  260034 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1225 19:05:12.239864  260034 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1225 19:05:12.269829  260034 cri.go:96] found id: "d200870ef180511332d71d52b930a74dc0fbdb1935fecb1c39424f46f42d10fb"
	I1225 19:05:12.269848  260034 cri.go:96] found id: "33343dda71f9e4a85025812aa89a625b9ed4aacd8cf40e537a040b7a352c6338"
	I1225 19:05:12.269855  260034 cri.go:96] found id: ""
	I1225 19:05:12.269861  260034 logs.go:282] 2 containers: [d200870ef180511332d71d52b930a74dc0fbdb1935fecb1c39424f46f42d10fb 33343dda71f9e4a85025812aa89a625b9ed4aacd8cf40e537a040b7a352c6338]
	I1225 19:05:12.269957  260034 ssh_runner.go:195] Run: which crictl
	I1225 19:05:12.275475  260034 ssh_runner.go:195] Run: which crictl
	I1225 19:05:12.279889  260034 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1225 19:05:12.279973  260034 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1225 19:05:12.307969  260034 cri.go:96] found id: ""
	I1225 19:05:12.307996  260034 logs.go:282] 0 containers: []
	W1225 19:05:12.308008  260034 logs.go:284] No container was found matching "kindnet"
	I1225 19:05:12.308015  260034 cri.go:61] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1225 19:05:12.308082  260034 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=storage-provisioner
	I1225 19:05:12.350009  260034 cri.go:96] found id: ""
	I1225 19:05:12.350148  260034 logs.go:282] 0 containers: []
	W1225 19:05:12.350164  260034 logs.go:284] No container was found matching "storage-provisioner"
	I1225 19:05:12.350175  260034 logs.go:123] Gathering logs for kube-apiserver [c7e7a74e4da2eb0a24231c0da6fca5c4d2624c5eff5fd59ea857dba6d0787036] ...
	I1225 19:05:12.350189  260034 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 c7e7a74e4da2eb0a24231c0da6fca5c4d2624c5eff5fd59ea857dba6d0787036"
	I1225 19:05:12.392219  260034 logs.go:123] Gathering logs for etcd [b9e20d208a63ece66f4b7d503d5220ba92810f69d6963dee30a8cf70eeec5508] ...
	I1225 19:05:12.392260  260034 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b9e20d208a63ece66f4b7d503d5220ba92810f69d6963dee30a8cf70eeec5508"
	I1225 19:05:12.430767  260034 logs.go:123] Gathering logs for kube-scheduler [5908cdc560e3221d929c96c779b2e73ece21f96acb63b3adbf448cc7de791f2b] ...
	I1225 19:05:12.430794  260034 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5908cdc560e3221d929c96c779b2e73ece21f96acb63b3adbf448cc7de791f2b"
	I1225 19:05:12.460265  260034 logs.go:123] Gathering logs for kube-scheduler [47a1daa4bde269c4add70fd957319c2bd011e78566a1187d9a76b78fb4005782] ...
	I1225 19:05:12.460290  260034 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 47a1daa4bde269c4add70fd957319c2bd011e78566a1187d9a76b78fb4005782"
	I1225 19:05:12.489418  260034 logs.go:123] Gathering logs for kube-controller-manager [d200870ef180511332d71d52b930a74dc0fbdb1935fecb1c39424f46f42d10fb] ...
	I1225 19:05:12.489458  260034 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d200870ef180511332d71d52b930a74dc0fbdb1935fecb1c39424f46f42d10fb"
	I1225 19:05:12.518652  260034 logs.go:123] Gathering logs for kube-controller-manager [33343dda71f9e4a85025812aa89a625b9ed4aacd8cf40e537a040b7a352c6338] ...
	I1225 19:05:12.518681  260034 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 33343dda71f9e4a85025812aa89a625b9ed4aacd8cf40e537a040b7a352c6338"
	I1225 19:05:12.547281  260034 logs.go:123] Gathering logs for CRI-O ...
	I1225 19:05:12.547305  260034 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1225 19:05:11.434801  325002 out.go:252]   - Configuring RBAC rules ...
	I1225 19:05:11.434979  325002 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1225 19:05:11.438292  325002 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1225 19:05:11.444847  325002 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1225 19:05:11.447279  325002 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1225 19:05:11.449852  325002 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1225 19:05:11.452172  325002 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1225 19:05:11.787073  325002 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1225 19:05:12.207832  325002 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1225 19:05:12.787437  325002 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1225 19:05:12.788543  325002 kubeadm.go:319] 
	I1225 19:05:12.788654  325002 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1225 19:05:12.788684  325002 kubeadm.go:319] 
	I1225 19:05:12.788805  325002 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1225 19:05:12.788820  325002 kubeadm.go:319] 
	I1225 19:05:12.788869  325002 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1225 19:05:12.788999  325002 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1225 19:05:12.789085  325002 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1225 19:05:12.789098  325002 kubeadm.go:319] 
	I1225 19:05:12.789168  325002 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1225 19:05:12.789182  325002 kubeadm.go:319] 
	I1225 19:05:12.789253  325002 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1225 19:05:12.789265  325002 kubeadm.go:319] 
	I1225 19:05:12.789333  325002 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1225 19:05:12.789437  325002 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1225 19:05:12.789550  325002 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1225 19:05:12.789562  325002 kubeadm.go:319] 
	I1225 19:05:12.789700  325002 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1225 19:05:12.789825  325002 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1225 19:05:12.789839  325002 kubeadm.go:319] 
	I1225 19:05:12.789975  325002 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token l3otb8.dp8zrsjgr44c03sh \
	I1225 19:05:12.790127  325002 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:0fa81e5b6cf900085d4303938dc22eec97b7b2affd914cb977b5ad4f033ddf10 \
	I1225 19:05:12.790162  325002 kubeadm.go:319] 	--control-plane 
	I1225 19:05:12.790171  325002 kubeadm.go:319] 
	I1225 19:05:12.790309  325002 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1225 19:05:12.790323  325002 kubeadm.go:319] 
	I1225 19:05:12.790435  325002 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token l3otb8.dp8zrsjgr44c03sh \
	I1225 19:05:12.790560  325002 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:0fa81e5b6cf900085d4303938dc22eec97b7b2affd914cb977b5ad4f033ddf10 
	I1225 19:05:12.793905  325002 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1045-gcp\n", err: exit status 1
	I1225 19:05:12.794075  325002 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1225 19:05:12.794116  325002 cni.go:84] Creating CNI manager for "calico"
	I1225 19:05:12.795869  325002 out.go:179] * Configuring Calico (Container Networking Interface) ...
	I1225 19:05:12.797258  325002 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.3/kubectl ...
	I1225 19:05:12.797280  325002 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (329943 bytes)
	I1225 19:05:12.813728  325002 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
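	The kubectl apply above installs the Calico manifest that minikube staged at /var/tmp/minikube/cni.yaml. Once the apiserver accepts it, the rollout can be watched with label selectors; a sketch assuming the stock Calico manifest names and labels:

	    kubectl -n kube-system get pods -l k8s-app=calico-node
	    kubectl -n kube-system rollout status daemonset/calico-node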
	
	
	==> CRI-O <==
	Dec 25 19:04:31 default-k8s-diff-port-960022 crio[570]: time="2025-12-25T19:04:31.858930873Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Dec 25 19:04:31 default-k8s-diff-port-960022 crio[570]: time="2025-12-25T19:04:31.863468215Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Dec 25 19:04:31 default-k8s-diff-port-960022 crio[570]: time="2025-12-25T19:04:31.863493135Z" level=info msg="Updated default CNI network name to kindnet"
	Dec 25 19:04:49 default-k8s-diff-port-960022 crio[570]: time="2025-12-25T19:04:49.98294549Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=0d8ae04a-b627-4126-b021-5dee5acaf8b9 name=/runtime.v1.ImageService/ImageStatus
	Dec 25 19:04:49 default-k8s-diff-port-960022 crio[570]: time="2025-12-25T19:04:49.983937595Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=9c7cabc0-c3d9-4647-adf4-25db9d98d3d2 name=/runtime.v1.ImageService/ImageStatus
	Dec 25 19:04:49 default-k8s-diff-port-960022 crio[570]: time="2025-12-25T19:04:49.984921427Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-fphlq/dashboard-metrics-scraper" id=f0520f2c-98e7-46de-96d8-2d78549af1e6 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 25 19:04:49 default-k8s-diff-port-960022 crio[570]: time="2025-12-25T19:04:49.985053716Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 25 19:04:49 default-k8s-diff-port-960022 crio[570]: time="2025-12-25T19:04:49.992409317Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 25 19:04:49 default-k8s-diff-port-960022 crio[570]: time="2025-12-25T19:04:49.993066498Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 25 19:04:50 default-k8s-diff-port-960022 crio[570]: time="2025-12-25T19:04:50.022634972Z" level=info msg="Created container 14c27e56e2876104b9b97af1293ed36130d30c1c3b4118d07854fbbf7d79831b: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-fphlq/dashboard-metrics-scraper" id=f0520f2c-98e7-46de-96d8-2d78549af1e6 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 25 19:04:50 default-k8s-diff-port-960022 crio[570]: time="2025-12-25T19:04:50.023315184Z" level=info msg="Starting container: 14c27e56e2876104b9b97af1293ed36130d30c1c3b4118d07854fbbf7d79831b" id=de845614-fadb-4c4d-bb2a-93156ccfdefd name=/runtime.v1.RuntimeService/StartContainer
	Dec 25 19:04:50 default-k8s-diff-port-960022 crio[570]: time="2025-12-25T19:04:50.02542031Z" level=info msg="Started container" PID=1773 containerID=14c27e56e2876104b9b97af1293ed36130d30c1c3b4118d07854fbbf7d79831b description=kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-fphlq/dashboard-metrics-scraper id=de845614-fadb-4c4d-bb2a-93156ccfdefd name=/runtime.v1.RuntimeService/StartContainer sandboxID=0c71f75e9ba768a31e50c04d7264137071a6fdc51a04829ee5f6edd298136368
	Dec 25 19:04:50 default-k8s-diff-port-960022 crio[570]: time="2025-12-25T19:04:50.120261478Z" level=info msg="Removing container: 7de06a85103fcab5625cb5cc973880cf40f64d068132d994a54b5fbe58f7d967" id=000546b9-cc1b-4715-81b6-dd583d66c824 name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 25 19:04:50 default-k8s-diff-port-960022 crio[570]: time="2025-12-25T19:04:50.13159481Z" level=info msg="Removed container 7de06a85103fcab5625cb5cc973880cf40f64d068132d994a54b5fbe58f7d967: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-fphlq/dashboard-metrics-scraper" id=000546b9-cc1b-4715-81b6-dd583d66c824 name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 25 19:04:52 default-k8s-diff-port-960022 crio[570]: time="2025-12-25T19:04:52.127459293Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=de115b99-486d-47ac-ad6b-c0eadf05bd4f name=/runtime.v1.ImageService/ImageStatus
	Dec 25 19:04:52 default-k8s-diff-port-960022 crio[570]: time="2025-12-25T19:04:52.128438679Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=b7af1123-cde0-403e-85bf-b0dceb45cb80 name=/runtime.v1.ImageService/ImageStatus
	Dec 25 19:04:52 default-k8s-diff-port-960022 crio[570]: time="2025-12-25T19:04:52.12948955Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=6c4c598c-21a2-4b19-82c2-98caa6d81180 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 25 19:04:52 default-k8s-diff-port-960022 crio[570]: time="2025-12-25T19:04:52.129658752Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 25 19:04:52 default-k8s-diff-port-960022 crio[570]: time="2025-12-25T19:04:52.134250978Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 25 19:04:52 default-k8s-diff-port-960022 crio[570]: time="2025-12-25T19:04:52.134445288Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/c47f9b5d14ed478d0ec191434d9b6cbe000d036a33ad3c4cf87b48b046b61fc5/merged/etc/passwd: no such file or directory"
	Dec 25 19:04:52 default-k8s-diff-port-960022 crio[570]: time="2025-12-25T19:04:52.134480243Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/c47f9b5d14ed478d0ec191434d9b6cbe000d036a33ad3c4cf87b48b046b61fc5/merged/etc/group: no such file or directory"
	Dec 25 19:04:52 default-k8s-diff-port-960022 crio[570]: time="2025-12-25T19:04:52.134770535Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 25 19:04:52 default-k8s-diff-port-960022 crio[570]: time="2025-12-25T19:04:52.159042705Z" level=info msg="Created container 5649c03c0aa633da79d3929ef429eb6a11236dda58d14ea813f653c269745beb: kube-system/storage-provisioner/storage-provisioner" id=6c4c598c-21a2-4b19-82c2-98caa6d81180 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 25 19:04:52 default-k8s-diff-port-960022 crio[570]: time="2025-12-25T19:04:52.159650813Z" level=info msg="Starting container: 5649c03c0aa633da79d3929ef429eb6a11236dda58d14ea813f653c269745beb" id=5c409d8a-2413-4aaa-b2cb-19a709075074 name=/runtime.v1.RuntimeService/StartContainer
	Dec 25 19:04:52 default-k8s-diff-port-960022 crio[570]: time="2025-12-25T19:04:52.163468283Z" level=info msg="Started container" PID=1787 containerID=5649c03c0aa633da79d3929ef429eb6a11236dda58d14ea813f653c269745beb description=kube-system/storage-provisioner/storage-provisioner id=5c409d8a-2413-4aaa-b2cb-19a709075074 name=/runtime.v1.RuntimeService/StartContainer sandboxID=a4597b0b7c5d1031b163b16c345ed41795d846297c62fdd6ada00ab9be2830ac
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED             STATE               NAME                        ATTEMPT             POD ID              POD                                                    NAMESPACE
	5649c03c0aa63       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           22 seconds ago      Running             storage-provisioner         1                   a4597b0b7c5d1       storage-provisioner                                    kube-system
	14c27e56e2876       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           24 seconds ago      Exited              dashboard-metrics-scraper   2                   0c71f75e9ba76       dashboard-metrics-scraper-6ffb444bf9-fphlq             kubernetes-dashboard
	d0ee12735cd4d       docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029   44 seconds ago      Running             kubernetes-dashboard        0                   486340be868d4       kubernetes-dashboard-855c9754f9-hm5lx                  kubernetes-dashboard
	fdbf81a94147e       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                           53 seconds ago      Running             coredns                     0                   e951b8833a218       coredns-66bc5c9577-c9wmz                               kube-system
	1ee01c76421d4       56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c                                           53 seconds ago      Running             busybox                     1                   27464547da776       busybox                                                default
	f2ca16d825df4       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           53 seconds ago      Exited              storage-provisioner         0                   a4597b0b7c5d1       storage-provisioner                                    kube-system
	3aa3159c3178d       4921d7a6dffa922dd679732ba4797085c4f39e9a53bee8b6fdb1d463e8571251                                           53 seconds ago      Running             kindnet-cni                 0                   dae936be19434       kindnet-hj6rr                                          kube-system
	132f0bde2b6bf       36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691                                           53 seconds ago      Running             kube-proxy                  0                   09ccaea963719       kube-proxy-wl784                                       kube-system
	deb534fd994d4       aa27095f5619377172f3d59289ccb2ba567ebea93a736d1705be068b2c030b0c                                           56 seconds ago      Running             kube-apiserver              0                   db3f2f5486cb2       kube-apiserver-default-k8s-diff-port-960022            kube-system
	d7afd3e6efe6f       a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1                                           56 seconds ago      Running             etcd                        0                   402df0c317d41       etcd-default-k8s-diff-port-960022                      kube-system
	e331a83a17cd9       5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942                                           56 seconds ago      Running             kube-controller-manager     0                   481506cdc0bf4       kube-controller-manager-default-k8s-diff-port-960022   kube-system
	354a51e629671       aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78                                           56 seconds ago      Running             kube-scheduler              0                   13ad13bab9fc4       kube-scheduler-default-k8s-diff-port-960022            kube-system
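
# Not part of the captured log: a hedged sketch of reproducing the container listing
# above on the node itself; assumes the default-k8s-diff-port-960022 profile still exists.
minikube -p default-k8s-diff-port-960022 ssh -- sudo crictl ps -a
# Inspect the exited dashboard-metrics-scraper container seen above:
minikube -p default-k8s-diff-port-960022 ssh -- sudo crictl logs 14c27e56e2876104b9b97af1293ed36130d30c1c3b4118d07854fbbf7d79831b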
	
	
	==> coredns [fdbf81a94147e6e035a27f9d8d605db6a96cbbbddbd65b9f768e335d836bedb5] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 66f0a748f44f6317a6b122af3f457c9dd0ecaed8718ffbf95a69434523efd9ec4992e71f54c7edd5753646fe9af89ac2138b9c3ce14d4a0ba9d2372a55f120bb
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:35806 - 63954 "HINFO IN 8877040098496447306.5506639103965423215. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.01950143s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-960022
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=default-k8s-diff-port-960022
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=65b0339f3ab6fa9cf527eb915d9288ef7a9c7fef
	                    minikube.k8s.io/name=default-k8s-diff-port-960022
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_25T19_03_21_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 25 Dec 2025 19:03:17 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-960022
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 25 Dec 2025 19:05:01 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 25 Dec 2025 19:04:51 +0000   Thu, 25 Dec 2025 19:03:16 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 25 Dec 2025 19:04:51 +0000   Thu, 25 Dec 2025 19:03:16 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 25 Dec 2025 19:04:51 +0000   Thu, 25 Dec 2025 19:03:16 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 25 Dec 2025 19:04:51 +0000   Thu, 25 Dec 2025 19:03:37 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.103.2
	  Hostname:    default-k8s-diff-port-960022
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863360Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863360Ki
	  pods:               110
	System Info:
	  Machine ID:                 6a05ebbd9dbde47388fc365d694bbbfd
	  System UUID:                66f57d40-b312-40d1-9a39-442700171c0b
	  Boot ID:                    665c5054-bd76-444c-ba4d-23c4edde1464
	  Kernel Version:             6.8.0-1045-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.3
	  Kubelet Version:            v1.34.3
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         93s
	  kube-system                 coredns-66bc5c9577-c9wmz                                100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     109s
	  kube-system                 etcd-default-k8s-diff-port-960022                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         114s
	  kube-system                 kindnet-hj6rr                                           100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      109s
	  kube-system                 kube-apiserver-default-k8s-diff-port-960022             250m (3%)     0 (0%)      0 (0%)           0 (0%)         114s
	  kube-system                 kube-controller-manager-default-k8s-diff-port-960022    200m (2%)     0 (0%)      0 (0%)           0 (0%)         114s
	  kube-system                 kube-proxy-wl784                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         109s
	  kube-system                 kube-scheduler-default-k8s-diff-port-960022             100m (1%)     0 (0%)      0 (0%)           0 (0%)         114s
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         108s
	  kubernetes-dashboard        dashboard-metrics-scraper-6ffb444bf9-fphlq              0 (0%)        0 (0%)      0 (0%)           0 (0%)         50s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-hm5lx                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         50s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 108s               kube-proxy       
	  Normal  Starting                 52s                kube-proxy       
	  Normal  Starting                 115s               kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  114s               kubelet          Node default-k8s-diff-port-960022 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    114s               kubelet          Node default-k8s-diff-port-960022 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     114s               kubelet          Node default-k8s-diff-port-960022 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           110s               node-controller  Node default-k8s-diff-port-960022 event: Registered Node default-k8s-diff-port-960022 in Controller
	  Normal  NodeReady                97s                kubelet          Node default-k8s-diff-port-960022 status is now: NodeReady
	  Normal  Starting                 57s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  56s (x8 over 57s)  kubelet          Node default-k8s-diff-port-960022 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    56s (x8 over 57s)  kubelet          Node default-k8s-diff-port-960022 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     56s (x8 over 57s)  kubelet          Node default-k8s-diff-port-960022 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           51s                node-controller  Node default-k8s-diff-port-960022 event: Registered Node default-k8s-diff-port-960022 in Controller
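
# Not part of the captured log: a hedged sketch of regenerating the node description
# above with kubectl, using the same context name that appears later in this post-mortem.
kubectl --context default-k8s-diff-port-960022 describe node default-k8s-diff-port-960022
kubectl --context default-k8s-diff-port-960022 get events -A --sort-by=.lastTimestamp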
	
	
	==> dmesg <==
	[Dec25 18:17] MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
	[  +0.001703] TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details.
	[  +0.001000] MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
	[  +0.084012] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
	[  +0.391152] i8042: Warning: Keylock active
	[  +0.010665] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.485479] block sda: the capability attribute has been deprecated.
	[  +0.079658] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.024208] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +4.790329] kauditd_printk_skb: 47 callbacks suppressed
	
	
	==> etcd [d7afd3e6efe6f106fd792404c924d54e7a199c5c88a6c82664ffa1c729eee3ee] <==
	{"level":"warn","ts":"2025-12-25T19:04:19.667036Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58578","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-25T19:04:19.675073Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58584","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-25T19:04:19.682935Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58596","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-25T19:04:19.692077Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58622","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-25T19:04:19.702220Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58638","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-25T19:04:19.709275Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58648","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-25T19:04:19.716597Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58656","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-25T19:04:19.724125Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58682","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-25T19:04:19.730218Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58704","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-25T19:04:19.737016Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58714","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-25T19:04:19.743780Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58728","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-25T19:04:19.750189Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58748","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-25T19:04:19.757157Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58760","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-25T19:04:19.764733Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58774","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-25T19:04:19.771140Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58804","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-25T19:04:19.777622Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58824","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-25T19:04:19.785167Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58848","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-25T19:04:19.791698Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58864","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-25T19:04:19.798936Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58880","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-25T19:04:19.810458Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58904","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-25T19:04:19.816822Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58934","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-25T19:04:19.823680Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58958","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-25T19:04:19.871357Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58984","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-12-25T19:04:31.821468Z","caller":"traceutil/trace.go:172","msg":"trace[744845134] transaction","detail":"{read_only:false; response_revision:600; number_of_response:1; }","duration":"124.114112ms","start":"2025-12-25T19:04:31.697334Z","end":"2025-12-25T19:04:31.821448Z","steps":["trace[744845134] 'process raft request'  (duration: 123.971304ms)"],"step_count":1}
	{"level":"warn","ts":"2025-12-25T19:04:57.196599Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"133.642023ms","expected-duration":"100ms","prefix":"","request":"header:<ID:13873790924616199884 > lease_revoke:<id:40899b56e5c96e2e>","response":"size:28"}
	
	
	==> kernel <==
	 19:05:14 up 47 min,  0 user,  load average: 4.01, 3.02, 2.07
	Linux default-k8s-diff-port-960022 6.8.0-1045-gcp #48~22.04.1-Ubuntu SMP Tue Nov 25 13:07:56 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [3aa3159c3178dba42f58b963940a73d87ed0b361760a6b4cda22ce96594b70b9] <==
	I1225 19:04:21.599473       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1225 19:04:21.599743       1 main.go:139] hostIP = 192.168.103.2
	podIP = 192.168.103.2
	I1225 19:04:21.599883       1 main.go:148] setting mtu 1500 for CNI 
	I1225 19:04:21.599932       1 main.go:178] kindnetd IP family: "ipv4"
	I1225 19:04:21.599951       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-12-25T19:04:21Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1225 19:04:21.843093       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1225 19:04:21.843158       1 controller.go:381] "Waiting for informer caches to sync"
	I1225 19:04:21.843177       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1225 19:04:21.843341       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1225 19:04:22.344342       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1225 19:04:22.344379       1 metrics.go:72] Registering metrics
	I1225 19:04:22.396504       1 controller.go:711] "Syncing nftables rules"
	I1225 19:04:31.802078       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1225 19:04:31.802145       1 main.go:301] handling current node
	I1225 19:04:41.803514       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1225 19:04:41.803591       1 main.go:301] handling current node
	I1225 19:04:51.802453       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1225 19:04:51.802491       1 main.go:301] handling current node
	I1225 19:05:01.802030       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1225 19:05:01.802078       1 main.go:301] handling current node
	I1225 19:05:11.811035       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1225 19:05:11.811074       1 main.go:301] handling current node
	
	
	==> kube-apiserver [deb534fd994d4a2ae1235cd069ddaa760e1a5e6170fbf9a1ea236267d7a7dbf3] <==
	I1225 19:04:20.376108       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1225 19:04:20.375829       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1225 19:04:20.379878       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1225 19:04:20.375863       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1225 19:04:20.376004       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1225 19:04:20.376177       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1225 19:04:20.378754       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1225 19:04:20.378779       1 aggregator.go:171] initial CRD sync complete...
	I1225 19:04:20.385491       1 autoregister_controller.go:144] Starting autoregister controller
	I1225 19:04:20.385501       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1225 19:04:20.385508       1 cache.go:39] Caches are synced for autoregister controller
	I1225 19:04:20.390795       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1225 19:04:20.429503       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1225 19:04:20.446720       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1225 19:04:20.724949       1 controller.go:667] quota admission added evaluator for: namespaces
	I1225 19:04:20.753453       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1225 19:04:20.774035       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1225 19:04:20.784207       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1225 19:04:20.790434       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1225 19:04:20.824641       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.103.107.182"}
	I1225 19:04:20.835251       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.104.42.14"}
	I1225 19:04:21.279490       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1225 19:04:23.759997       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1225 19:04:24.208668       1 controller.go:667] quota admission added evaluator for: endpoints
	I1225 19:04:24.360755       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [e331a83a17cd96725879adde3c8dabff77823d5c1af59510c5a9822f15b9601d] <==
	I1225 19:04:23.727961       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1225 19:04:23.730318       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1225 19:04:23.730491       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1225 19:04:23.730605       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1225 19:04:23.731517       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1225 19:04:23.734091       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1225 19:04:23.735306       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1225 19:04:23.736528       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1225 19:04:23.738684       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1225 19:04:23.741014       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1225 19:04:23.754393       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1225 19:04:23.754432       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1225 19:04:23.754519       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1225 19:04:23.754525       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1225 19:04:23.754606       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1225 19:04:23.754679       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1225 19:04:23.754691       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="default-k8s-diff-port-960022"
	I1225 19:04:23.754801       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1225 19:04:23.754852       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1225 19:04:23.754954       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1225 19:04:23.755051       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1225 19:04:23.757311       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1225 19:04:23.758686       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1225 19:04:23.761114       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1225 19:04:23.790978       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	
	
	==> kube-proxy [132f0bde2b6bf2854770419c66dbc956a1f62dbc7f3be89c002b08f5c1f6eaa0] <==
	I1225 19:04:21.395198       1 server_linux.go:53] "Using iptables proxy"
	I1225 19:04:21.459526       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1225 19:04:21.559930       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1225 19:04:21.559979       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.103.2"]
	E1225 19:04:21.560080       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1225 19:04:21.578741       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1225 19:04:21.578801       1 server_linux.go:132] "Using iptables Proxier"
	I1225 19:04:21.585207       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1225 19:04:21.585666       1 server.go:527] "Version info" version="v1.34.3"
	I1225 19:04:21.585697       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1225 19:04:21.587481       1 config.go:200] "Starting service config controller"
	I1225 19:04:21.587640       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1225 19:04:21.587723       1 config.go:309] "Starting node config controller"
	I1225 19:04:21.587734       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1225 19:04:21.587739       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1225 19:04:21.588164       1 config.go:106] "Starting endpoint slice config controller"
	I1225 19:04:21.588175       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1225 19:04:21.588189       1 config.go:403] "Starting serviceCIDR config controller"
	I1225 19:04:21.588203       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1225 19:04:21.687766       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1225 19:04:21.688931       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1225 19:04:21.688948       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-scheduler [354a51e629671e49dd48aa32ce81ed41d5eaf4761e538194e03358bc1fcc7c09] <==
	I1225 19:04:19.289260       1 serving.go:386] Generated self-signed cert in-memory
	W1225 19:04:20.327056       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1225 19:04:20.327094       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1225 19:04:20.327105       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1225 19:04:20.327114       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1225 19:04:20.371017       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.3"
	I1225 19:04:20.371045       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1225 19:04:20.373846       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1225 19:04:20.373888       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1225 19:04:20.374263       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1225 19:04:20.374348       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1225 19:04:20.474940       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Dec 25 19:04:24 default-k8s-diff-port-960022 kubelet[732]: I1225 19:04:24.275674     732 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6mhpz\" (UniqueName: \"kubernetes.io/projected/877f70b3-c96c-4876-8dbe-f0ad7d7e0a01-kube-api-access-6mhpz\") pod \"kubernetes-dashboard-855c9754f9-hm5lx\" (UID: \"877f70b3-c96c-4876-8dbe-f0ad7d7e0a01\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-hm5lx"
	Dec 25 19:04:24 default-k8s-diff-port-960022 kubelet[732]: I1225 19:04:24.275743     732 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/877f70b3-c96c-4876-8dbe-f0ad7d7e0a01-tmp-volume\") pod \"kubernetes-dashboard-855c9754f9-hm5lx\" (UID: \"877f70b3-c96c-4876-8dbe-f0ad7d7e0a01\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-hm5lx"
	Dec 25 19:04:25 default-k8s-diff-port-960022 kubelet[732]: I1225 19:04:25.430140     732 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Dec 25 19:04:28 default-k8s-diff-port-960022 kubelet[732]: I1225 19:04:28.051169     732 scope.go:117] "RemoveContainer" containerID="fe9a8db3687bc9761a621fa1ff2579fd157df8850787a27f2f0b9ed4be852715"
	Dec 25 19:04:29 default-k8s-diff-port-960022 kubelet[732]: I1225 19:04:29.056179     732 scope.go:117] "RemoveContainer" containerID="fe9a8db3687bc9761a621fa1ff2579fd157df8850787a27f2f0b9ed4be852715"
	Dec 25 19:04:29 default-k8s-diff-port-960022 kubelet[732]: I1225 19:04:29.056352     732 scope.go:117] "RemoveContainer" containerID="7de06a85103fcab5625cb5cc973880cf40f64d068132d994a54b5fbe58f7d967"
	Dec 25 19:04:29 default-k8s-diff-port-960022 kubelet[732]: E1225 19:04:29.056562     732 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-fphlq_kubernetes-dashboard(b0c4f284-78d5-443d-a148-8562b8f45324)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-fphlq" podUID="b0c4f284-78d5-443d-a148-8562b8f45324"
	Dec 25 19:04:30 default-k8s-diff-port-960022 kubelet[732]: I1225 19:04:30.059052     732 scope.go:117] "RemoveContainer" containerID="7de06a85103fcab5625cb5cc973880cf40f64d068132d994a54b5fbe58f7d967"
	Dec 25 19:04:30 default-k8s-diff-port-960022 kubelet[732]: E1225 19:04:30.059272     732 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-fphlq_kubernetes-dashboard(b0c4f284-78d5-443d-a148-8562b8f45324)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-fphlq" podUID="b0c4f284-78d5-443d-a148-8562b8f45324"
	Dec 25 19:04:31 default-k8s-diff-port-960022 kubelet[732]: I1225 19:04:31.823301     732 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-hm5lx" podStartSLOduration=1.9369101039999999 podStartE2EDuration="7.823275852s" podCreationTimestamp="2025-12-25 19:04:24 +0000 UTC" firstStartedPulling="2025-12-25 19:04:24.480595068 +0000 UTC m=+6.601216581" lastFinishedPulling="2025-12-25 19:04:30.366960831 +0000 UTC m=+12.487582329" observedRunningTime="2025-12-25 19:04:31.077271364 +0000 UTC m=+13.197892883" watchObservedRunningTime="2025-12-25 19:04:31.823275852 +0000 UTC m=+13.943897371"
	Dec 25 19:04:35 default-k8s-diff-port-960022 kubelet[732]: I1225 19:04:35.729098     732 scope.go:117] "RemoveContainer" containerID="7de06a85103fcab5625cb5cc973880cf40f64d068132d994a54b5fbe58f7d967"
	Dec 25 19:04:35 default-k8s-diff-port-960022 kubelet[732]: E1225 19:04:35.729315     732 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-fphlq_kubernetes-dashboard(b0c4f284-78d5-443d-a148-8562b8f45324)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-fphlq" podUID="b0c4f284-78d5-443d-a148-8562b8f45324"
	Dec 25 19:04:49 default-k8s-diff-port-960022 kubelet[732]: I1225 19:04:49.982483     732 scope.go:117] "RemoveContainer" containerID="7de06a85103fcab5625cb5cc973880cf40f64d068132d994a54b5fbe58f7d967"
	Dec 25 19:04:50 default-k8s-diff-port-960022 kubelet[732]: I1225 19:04:50.118732     732 scope.go:117] "RemoveContainer" containerID="7de06a85103fcab5625cb5cc973880cf40f64d068132d994a54b5fbe58f7d967"
	Dec 25 19:04:50 default-k8s-diff-port-960022 kubelet[732]: I1225 19:04:50.119001     732 scope.go:117] "RemoveContainer" containerID="14c27e56e2876104b9b97af1293ed36130d30c1c3b4118d07854fbbf7d79831b"
	Dec 25 19:04:50 default-k8s-diff-port-960022 kubelet[732]: E1225 19:04:50.119204     732 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-fphlq_kubernetes-dashboard(b0c4f284-78d5-443d-a148-8562b8f45324)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-fphlq" podUID="b0c4f284-78d5-443d-a148-8562b8f45324"
	Dec 25 19:04:52 default-k8s-diff-port-960022 kubelet[732]: I1225 19:04:52.127013     732 scope.go:117] "RemoveContainer" containerID="f2ca16d825df4a18996b07e424ec1ab2fbf76ac12170d34c7de8ec692f2addc5"
	Dec 25 19:04:55 default-k8s-diff-port-960022 kubelet[732]: I1225 19:04:55.728467     732 scope.go:117] "RemoveContainer" containerID="14c27e56e2876104b9b97af1293ed36130d30c1c3b4118d07854fbbf7d79831b"
	Dec 25 19:04:55 default-k8s-diff-port-960022 kubelet[732]: E1225 19:04:55.728684     732 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-fphlq_kubernetes-dashboard(b0c4f284-78d5-443d-a148-8562b8f45324)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-fphlq" podUID="b0c4f284-78d5-443d-a148-8562b8f45324"
	Dec 25 19:05:07 default-k8s-diff-port-960022 kubelet[732]: I1225 19:05:07.982543     732 scope.go:117] "RemoveContainer" containerID="14c27e56e2876104b9b97af1293ed36130d30c1c3b4118d07854fbbf7d79831b"
	Dec 25 19:05:07 default-k8s-diff-port-960022 kubelet[732]: E1225 19:05:07.982763     732 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-fphlq_kubernetes-dashboard(b0c4f284-78d5-443d-a148-8562b8f45324)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-fphlq" podUID="b0c4f284-78d5-443d-a148-8562b8f45324"
	Dec 25 19:05:09 default-k8s-diff-port-960022 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Dec 25 19:05:09 default-k8s-diff-port-960022 systemd[1]: kubelet.service: Deactivated successfully.
	Dec 25 19:05:09 default-k8s-diff-port-960022 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 25 19:05:09 default-k8s-diff-port-960022 systemd[1]: kubelet.service: Consumed 1.695s CPU time.
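
# Not part of the captured log: a hedged sketch of pulling a longer kubelet journal than
# the window shown above; assumes the node container for this profile is still running.
minikube -p default-k8s-diff-port-960022 ssh -- sudo journalctl -u kubelet --no-pager -n 200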
	
	
	==> kubernetes-dashboard [d0ee12735cd4db3a4f33b6c01940acfb704c79ae33d33dd565e52a63afdb2b14] <==
	2025/12/25 19:04:30 Starting overwatch
	2025/12/25 19:04:30 Using namespace: kubernetes-dashboard
	2025/12/25 19:04:30 Using in-cluster config to connect to apiserver
	2025/12/25 19:04:30 Using secret token for csrf signing
	2025/12/25 19:04:30 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/12/25 19:04:30 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/12/25 19:04:30 Successful initial request to the apiserver, version: v1.34.3
	2025/12/25 19:04:30 Generating JWE encryption key
	2025/12/25 19:04:30 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/12/25 19:04:30 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/12/25 19:04:30 Initializing JWE encryption key from synchronized object
	2025/12/25 19:04:30 Creating in-cluster Sidecar client
	2025/12/25 19:04:30 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/12/25 19:04:30 Serving insecurely on HTTP port: 9090
	2025/12/25 19:05:00 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	
	
	==> storage-provisioner [5649c03c0aa633da79d3929ef429eb6a11236dda58d14ea813f653c269745beb] <==
	I1225 19:04:52.177151       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1225 19:04:52.184243       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1225 19:04:52.184280       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1225 19:04:52.186881       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1225 19:04:55.641909       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1225 19:04:59.902044       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1225 19:05:03.500341       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1225 19:05:06.554091       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1225 19:05:09.576255       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1225 19:05:09.581054       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1225 19:05:09.581183       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1225 19:05:09.581348       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"7550cadf-4431-4746-a11e-df2346058022", APIVersion:"v1", ResourceVersion:"633", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-diff-port-960022_613ba5f7-7d76-4286-be22-5c4833f040bd became leader
	I1225 19:05:09.581361       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-960022_613ba5f7-7d76-4286-be22-5c4833f040bd!
	W1225 19:05:09.587195       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1225 19:05:09.590499       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1225 19:05:09.682298       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-960022_613ba5f7-7d76-4286-be22-5c4833f040bd!
	W1225 19:05:11.593432       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1225 19:05:11.596888       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1225 19:05:13.600515       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1225 19:05:13.607613       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	
	
	==> storage-provisioner [f2ca16d825df4a18996b07e424ec1ab2fbf76ac12170d34c7de8ec692f2addc5] <==
	I1225 19:04:21.362187       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1225 19:04:51.364657       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

                                                
                                                
-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-960022 -n default-k8s-diff-port-960022
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-960022 -n default-k8s-diff-port-960022: exit status 2 (344.638946ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:270: (dbg) Run:  kubectl --context default-k8s-diff-port-960022 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
I1225 19:05:15.347957    9112 config.go:182] Loaded profile config "kindnet-910464": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
helpers_test.go:294: <<< TestStartStop/group/default-k8s-diff-port/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:295: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/Pause (6.61s)
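
To dig further into this failure, one option (a hedged sketch, not commands run by the test) is to exercise the pause path that TestStartStop/group/default-k8s-diff-port/serial/Pause drives by hand, reusing the binary path and profile name from the log above:

    out/minikube-linux-amd64 pause -p default-k8s-diff-port-960022 --alsologtostderr -v=1
    out/minikube-linux-amd64 status -p default-k8s-diff-port-960022
    out/minikube-linux-amd64 unpause -p default-k8s-diff-port-960022 --alsologtostderr -v=1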

                                                
                                    

Test pass (359/419)

Order  Passed test  Duration (s)
3 TestDownloadOnly/v1.28.0/json-events 4.36
4 TestDownloadOnly/v1.28.0/preload-exists 0
8 TestDownloadOnly/v1.28.0/LogsDuration 0.07
9 TestDownloadOnly/v1.28.0/DeleteAll 0.22
10 TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds 0.14
12 TestDownloadOnly/v1.34.3/json-events 3.1
13 TestDownloadOnly/v1.34.3/preload-exists 0
17 TestDownloadOnly/v1.34.3/LogsDuration 0.07
18 TestDownloadOnly/v1.34.3/DeleteAll 0.22
19 TestDownloadOnly/v1.34.3/DeleteAlwaysSucceeds 0.14
21 TestDownloadOnly/v1.35.0-rc.1/json-events 2.63
22 TestDownloadOnly/v1.35.0-rc.1/preload-exists 0
26 TestDownloadOnly/v1.35.0-rc.1/LogsDuration 0.07
27 TestDownloadOnly/v1.35.0-rc.1/DeleteAll 0.22
28 TestDownloadOnly/v1.35.0-rc.1/DeleteAlwaysSucceeds 0.14
29 TestDownloadOnlyKic 0.38
30 TestBinaryMirror 0.82
31 TestOffline 55.67
34 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.06
35 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.06
36 TestAddons/Setup 94.72
40 TestAddons/serial/GCPAuth/Namespaces 0.13
41 TestAddons/serial/GCPAuth/FakeCredentials 8.41
57 TestAddons/StoppedEnableDisable 18.97
58 TestCertOptions 25.81
59 TestCertExpiration 216.81
61 TestForceSystemdFlag 23.06
62 TestForceSystemdEnv 25.61
67 TestErrorSpam/setup 18.55
68 TestErrorSpam/start 0.64
69 TestErrorSpam/status 0.93
70 TestErrorSpam/pause 6.38
71 TestErrorSpam/unpause 4.91
72 TestErrorSpam/stop 8.09
75 TestFunctional/serial/CopySyncFile 0
76 TestFunctional/serial/StartWithProxy 39.88
77 TestFunctional/serial/AuditLog 0
78 TestFunctional/serial/SoftStart 5.98
79 TestFunctional/serial/KubeContext 0.05
80 TestFunctional/serial/KubectlGetPods 0.07
83 TestFunctional/serial/CacheCmd/cache/add_remote 2.44
84 TestFunctional/serial/CacheCmd/cache/add_local 0.87
85 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.06
86 TestFunctional/serial/CacheCmd/cache/list 0.06
87 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.28
88 TestFunctional/serial/CacheCmd/cache/cache_reload 1.48
89 TestFunctional/serial/CacheCmd/cache/delete 0.12
90 TestFunctional/serial/MinikubeKubectlCmd 0.11
91 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.11
92 TestFunctional/serial/ExtraConfig 66.82
93 TestFunctional/serial/ComponentHealth 0.06
94 TestFunctional/serial/LogsCmd 1.17
95 TestFunctional/serial/LogsFileCmd 1.19
96 TestFunctional/serial/InvalidService 4.02
98 TestFunctional/parallel/ConfigCmd 0.47
99 TestFunctional/parallel/DashboardCmd 5.3
100 TestFunctional/parallel/DryRun 0.38
101 TestFunctional/parallel/InternationalLanguage 0.26
102 TestFunctional/parallel/StatusCmd 1.1
106 TestFunctional/parallel/ServiceCmdConnect 6.71
107 TestFunctional/parallel/AddonsCmd 0.29
108 TestFunctional/parallel/PersistentVolumeClaim 20.65
110 TestFunctional/parallel/SSHCmd 0.63
111 TestFunctional/parallel/CpCmd 2.25
112 TestFunctional/parallel/MySQL 23.58
113 TestFunctional/parallel/FileSync 0.32
114 TestFunctional/parallel/CertSync 1.95
118 TestFunctional/parallel/NodeLabels 0.1
120 TestFunctional/parallel/NonActiveRuntimeDisabled 0.64
122 TestFunctional/parallel/License 0.27
123 TestFunctional/parallel/ProfileCmd/profile_not_create 0.5
124 TestFunctional/parallel/UpdateContextCmd/no_changes 0.16
125 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.15
126 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.16
127 TestFunctional/parallel/Version/short 0.06
128 TestFunctional/parallel/Version/components 0.53
129 TestFunctional/parallel/ProfileCmd/profile_list 0.49
130 TestFunctional/parallel/ProfileCmd/profile_json_output 0.47
131 TestFunctional/parallel/ImageCommands/ImageListShort 0.23
132 TestFunctional/parallel/ImageCommands/ImageListTable 0.24
133 TestFunctional/parallel/ImageCommands/ImageListJson 0.24
134 TestFunctional/parallel/ImageCommands/ImageListYaml 0.23
135 TestFunctional/parallel/ImageCommands/ImageBuild 3.93
136 TestFunctional/parallel/ImageCommands/Setup 0.41
137 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 1.65
139 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.62
140 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 6.09
141 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0
143 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 13.33
144 TestFunctional/parallel/MountCmd/any-port 11.33
145 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 1.45
146 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.36
147 TestFunctional/parallel/ImageCommands/ImageRemove 0.49
148 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.72
149 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.38
150 TestFunctional/parallel/MountCmd/specific-port 1.75
151 TestFunctional/parallel/MountCmd/VerifyCleanup 1.86
152 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.12
153 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0
157 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.13
158 TestFunctional/parallel/ServiceCmd/DeployApp 8.15
159 TestFunctional/parallel/ServiceCmd/List 1.77
160 TestFunctional/parallel/ServiceCmd/JSONOutput 1.76
161 TestFunctional/parallel/ServiceCmd/HTTPS 0.55
162 TestFunctional/parallel/ServiceCmd/Format 0.55
163 TestFunctional/parallel/ServiceCmd/URL 0.54
164 TestFunctional/delete_echo-server_images 0.04
165 TestFunctional/delete_my-image_image 0.02
166 TestFunctional/delete_minikube_cached_images 0.02
170 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CopySyncFile 0
171 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/StartWithProxy 39.14
172 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/AuditLog 0
173 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/SoftStart 6.05
174 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/KubeContext 0.05
175 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/KubectlGetPods 0.06
178 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CacheCmd/cache/add_remote 2.45
179 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CacheCmd/cache/add_local 0.83
180 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CacheCmd/cache/CacheDelete 0.06
181 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CacheCmd/cache/list 0.06
182 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CacheCmd/cache/verify_cache_inside_node 0.28
183 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CacheCmd/cache/cache_reload 1.5
184 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CacheCmd/cache/delete 0.13
185 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/MinikubeKubectlCmd 0.12
186 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/MinikubeKubectlCmdDirectly 0.11
187 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/ExtraConfig 53.51
188 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/ComponentHealth 0.06
189 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/LogsCmd 1.2
190 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/LogsFileCmd 1.21
191 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/InvalidService 4.53
193 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ConfigCmd 0.47
194 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/DashboardCmd 11.32
195 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/DryRun 0.54
196 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/InternationalLanguage 0.22
197 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/StatusCmd 1.27
201 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmdConnect 7.8
202 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/AddonsCmd 0.16
203 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PersistentVolumeClaim 22.85
205 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/SSHCmd 0.59
206 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/CpCmd 1.88
207 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/MySQL 22.29
208 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/FileSync 0.3
209 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/CertSync 1.85
213 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/NodeLabels 0.07
215 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/NonActiveRuntimeDisabled 0.63
217 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/License 0.26
218 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageListShort 0.26
219 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageListTable 0.24
220 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageListJson 0.28
221 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageListYaml 0.27
222 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageBuild 3.45
223 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/Setup 0.17
224 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageLoadDaemon 1.38
225 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/Version/short 0.06
226 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/Version/components 0.5
227 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/UpdateContextCmd/no_changes 0.18
228 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/UpdateContextCmd/no_minikube_cluster 0.18
229 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/UpdateContextCmd/no_clusters 0.18
230 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmd/DeployApp 8.17
231 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageReloadDaemon 0.92
233 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/RunSecondTunnel 0.43
234 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageTagAndLoadDaemon 0.99
235 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/StartTunnel 0
237 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/WaitService/Setup 7.2
238 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageSaveToFile 0.35
239 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageRemove 0.72
240 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageLoadFromFile 0.6
241 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageSaveDaemon 0.37
242 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmd/List 0.51
243 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmd/JSONOutput 0.5
244 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmd/HTTPS 0.37
245 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/WaitService/IngressIP 0.06
246 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/AccessDirect 0
250 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/DeleteTunnel 0.12
251 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmd/Format 0.4
252 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmd/URL 0.55
253 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ProfileCmd/profile_not_create 0.55
254 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ProfileCmd/profile_list 0.55
255 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/MountCmd/any-port 13.14
256 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ProfileCmd/profile_json_output 0.54
257 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/MountCmd/specific-port 1.86
258 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/MountCmd/VerifyCleanup 1.64
259 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/delete_echo-server_images 0.04
260 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/delete_my-image_image 0.02
261 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/delete_minikube_cached_images 0.02
265 TestMultiControlPlane/serial/StartCluster 113.31
266 TestMultiControlPlane/serial/DeployApp 4.97
267 TestMultiControlPlane/serial/PingHostFromPods 1.07
268 TestMultiControlPlane/serial/AddWorkerNode 25.75
269 TestMultiControlPlane/serial/NodeLabels 0.06
270 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.89
271 TestMultiControlPlane/serial/CopyFile 16.96
272 TestMultiControlPlane/serial/StopSecondaryNode 13.84
273 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.72
274 TestMultiControlPlane/serial/RestartSecondaryNode 8.75
275 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 0.9
276 TestMultiControlPlane/serial/RestartClusterKeepsNodes 107.48
277 TestMultiControlPlane/serial/DeleteSecondaryNode 10.64
278 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.72
279 TestMultiControlPlane/serial/StopCluster 31.99
280 TestMultiControlPlane/serial/RestartCluster 58.5
281 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.7
282 TestMultiControlPlane/serial/AddSecondaryNode 43.62
283 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 0.91
288 TestJSONOutput/start/Command 42.07
289 TestJSONOutput/start/Audit 0
291 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
292 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
295 TestJSONOutput/pause/Audit 0
297 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
298 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
301 TestJSONOutput/unpause/Audit 0
303 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
304 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
306 TestJSONOutput/stop/Command 8
307 TestJSONOutput/stop/Audit 0
309 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
310 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
311 TestErrorJSONOutput 0.24
313 TestKicCustomNetwork/create_custom_network 28.42
314 TestKicCustomNetwork/use_default_bridge_network 24.61
315 TestKicExistingNetwork 21.68
316 TestKicCustomSubnet 26.71
317 TestKicStaticIP 21.86
318 TestMainNoArgs 0.06
319 TestMinikubeProfile 50.43
322 TestMountStart/serial/StartWithMountFirst 7.6
323 TestMountStart/serial/VerifyMountFirst 0.27
324 TestMountStart/serial/StartWithMountSecond 4.68
325 TestMountStart/serial/VerifyMountSecond 0.27
326 TestMountStart/serial/DeleteFirst 1.68
327 TestMountStart/serial/VerifyMountPostDelete 0.27
328 TestMountStart/serial/Stop 1.26
329 TestMountStart/serial/RestartStopped 7.06
330 TestMountStart/serial/VerifyMountPostStop 0.27
333 TestMultiNode/serial/FreshStart2Nodes 65.88
334 TestMultiNode/serial/DeployApp2Nodes 3.5
335 TestMultiNode/serial/PingHostFrom2Pods 0.73
336 TestMultiNode/serial/AddNode 26.8
337 TestMultiNode/serial/MultiNodeLabels 0.06
338 TestMultiNode/serial/ProfileList 0.64
339 TestMultiNode/serial/CopyFile 9.68
340 TestMultiNode/serial/StopNode 2.25
341 TestMultiNode/serial/StartAfterStop 7.02
342 TestMultiNode/serial/RestartKeepsNodes 72.25
343 TestMultiNode/serial/DeleteNode 5.25
344 TestMultiNode/serial/StopMultiNode 28.52
345 TestMultiNode/serial/RestartMultiNode 48.59
346 TestMultiNode/serial/ValidateNameConflict 22.27
353 TestScheduledStopUnix 95.35
356 TestInsufficientStorage 8.64
357 TestRunningBinaryUpgrade 294.37
359 TestKubernetesUpgrade 329.74
360 TestMissingContainerUpgrade 65.08
362 TestStoppedBinaryUpgrade/Setup 0.71
363 TestPause/serial/Start 51.77
365 TestNoKubernetes/serial/StartNoK8sWithVersion 0.09
366 TestNoKubernetes/serial/StartWithK8s 39.74
367 TestStoppedBinaryUpgrade/Upgrade 304.28
368 TestNoKubernetes/serial/StartWithStopK8s 23.73
369 TestPause/serial/SecondStartNoReconfiguration 5.9
385 TestNetworkPlugins/group/false 3.83
389 TestNoKubernetes/serial/Start 9.64
390 TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads 0
391 TestNoKubernetes/serial/VerifyK8sNotRunning 0.31
392 TestNoKubernetes/serial/ProfileList 17.42
393 TestNoKubernetes/serial/Stop 1.29
394 TestNoKubernetes/serial/StartNoArgs 6.71
395 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.29
396 TestPreload/Start-NoPreload-PullImage 55.97
397 TestPreload/Restart-With-Preload-Check-User-Image 51.2
399 TestStoppedBinaryUpgrade/MinikubeLogs 1
401 TestStartStop/group/old-k8s-version/serial/FirstStart 50.85
403 TestStartStop/group/no-preload/serial/FirstStart 50.21
404 TestStartStop/group/old-k8s-version/serial/DeployApp 9.51
406 TestStartStop/group/embed-certs/serial/FirstStart 41.85
408 TestStartStop/group/old-k8s-version/serial/Stop 16.14
409 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.24
410 TestStartStop/group/old-k8s-version/serial/SecondStart 50.47
411 TestStartStop/group/no-preload/serial/DeployApp 8.23
413 TestStartStop/group/no-preload/serial/Stop 16.72
414 TestStartStop/group/embed-certs/serial/DeployApp 7.23
416 TestStartStop/group/embed-certs/serial/Stop 18.16
417 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.2
418 TestStartStop/group/no-preload/serial/SecondStart 49.67
419 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.19
420 TestStartStop/group/embed-certs/serial/SecondStart 49.22
421 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 6.01
422 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.1
423 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.41
426 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 38.14
427 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 6
428 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.07
429 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.25
431 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 6.01
432 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.08
434 TestStartStop/group/newest-cni/serial/FirstStart 23.51
435 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.26
437 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 9.28
438 TestNetworkPlugins/group/auto/Start 39.26
440 TestStartStop/group/default-k8s-diff-port/serial/Stop 18.23
441 TestStartStop/group/newest-cni/serial/DeployApp 0
443 TestStartStop/group/newest-cni/serial/Stop 8.08
444 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.2
445 TestStartStop/group/newest-cni/serial/SecondStart 10.73
446 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.21
447 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 46.07
448 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
449 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
450 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.25
452 TestNetworkPlugins/group/auto/KubeletFlags 0.3
453 TestNetworkPlugins/group/auto/NetCatPod 9.33
454 TestNetworkPlugins/group/kindnet/Start 41.12
455 TestNetworkPlugins/group/auto/DNS 0.2
456 TestNetworkPlugins/group/auto/Localhost 0.16
457 TestNetworkPlugins/group/auto/HairPin 0.19
458 TestNetworkPlugins/group/calico/Start 50.24
459 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 6
460 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.08
461 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.28
463 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
464 TestNetworkPlugins/group/kindnet/KubeletFlags 0.31
465 TestNetworkPlugins/group/kindnet/NetCatPod 9.22
466 TestNetworkPlugins/group/custom-flannel/Start 49.34
467 TestNetworkPlugins/group/kindnet/DNS 0.13
468 TestNetworkPlugins/group/kindnet/Localhost 0.11
469 TestNetworkPlugins/group/kindnet/HairPin 0.12
470 TestNetworkPlugins/group/calico/ControllerPod 6.01
471 TestNetworkPlugins/group/enable-default-cni/Start 65.49
472 TestNetworkPlugins/group/flannel/Start 50.2
473 TestNetworkPlugins/group/calico/KubeletFlags 0.38
474 TestNetworkPlugins/group/calico/NetCatPod 13.23
475 TestNetworkPlugins/group/calico/DNS 0.11
476 TestNetworkPlugins/group/calico/Localhost 0.12
477 TestNetworkPlugins/group/calico/HairPin 0.12
478 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.32
479 TestNetworkPlugins/group/custom-flannel/NetCatPod 9.23
480 TestNetworkPlugins/group/custom-flannel/DNS 0.16
481 TestNetworkPlugins/group/custom-flannel/Localhost 0.1
482 TestNetworkPlugins/group/custom-flannel/HairPin 0.12
483 TestNetworkPlugins/group/bridge/Start 68.62
484 TestNetworkPlugins/group/flannel/ControllerPod 6.01
485 TestPreload/PreloadSrc/gcs 4.05
486 TestPreload/PreloadSrc/github 5.15
487 TestNetworkPlugins/group/flannel/KubeletFlags 0.3
488 TestNetworkPlugins/group/flannel/NetCatPod 9.25
489 TestPreload/PreloadSrc/gcs-cached 0.47
490 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.29
491 TestNetworkPlugins/group/enable-default-cni/NetCatPod 9.17
492 TestNetworkPlugins/group/flannel/DNS 0.11
493 TestNetworkPlugins/group/flannel/Localhost 0.1
494 TestNetworkPlugins/group/flannel/HairPin 0.09
495 TestNetworkPlugins/group/enable-default-cni/DNS 0.11
496 TestNetworkPlugins/group/enable-default-cni/Localhost 0.09
497 TestNetworkPlugins/group/enable-default-cni/HairPin 0.09
498 TestNetworkPlugins/group/bridge/KubeletFlags 0.28
499 TestNetworkPlugins/group/bridge/NetCatPod 9.17
500 TestNetworkPlugins/group/bridge/DNS 0.1
501 TestNetworkPlugins/group/bridge/Localhost 0.08
502 TestNetworkPlugins/group/bridge/HairPin 0.08
TestDownloadOnly/v1.28.0/json-events (4.36s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-658134 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-658134 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio: (4.360214766s)
--- PASS: TestDownloadOnly/v1.28.0/json-events (4.36s)
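Note: the download-only start this test drives can be reproduced outside the harness with a released minikube binary; the sketch below assumes minikube is on PATH and uses an illustrative profile name rather than the test's generated one.

    minikube start -o=json --download-only -p demo-download \
      --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker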

                                                
                                    
TestDownloadOnly/v1.28.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/preload-exists
I1225 18:27:56.308377    9112 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime crio
I1225 18:27:56.308463    9112 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22301-5579/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.28.0/preload-exists (0.00s)
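Note: preload-exists only checks that the tarball fetched by the previous step landed in the local cache. A minimal manual check, assuming the default MINIKUBE_HOME of ~/.minikube rather than the Jenkins workspace path used above:

    ls -lh ~/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4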

                                                
                                    
TestDownloadOnly/v1.28.0/LogsDuration (0.07s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-658134
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-658134: exit status 85 (67.779538ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬──────────┐
	│ COMMAND │                                                                                   ARGS                                                                                    │       PROFILE        │  USER   │ VERSION │     START TIME      │ END TIME │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼──────────┤
	│ start   │ -o=json --download-only -p download-only-658134 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio │ download-only-658134 │ jenkins │ v1.37.0 │ 25 Dec 25 18:27 UTC │          │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴──────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/25 18:27:51
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1225 18:27:51.996271    9123 out.go:360] Setting OutFile to fd 1 ...
	I1225 18:27:51.996954    9123 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1225 18:27:51.996965    9123 out.go:374] Setting ErrFile to fd 2...
	I1225 18:27:51.996971    9123 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1225 18:27:51.997177    9123 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22301-5579/.minikube/bin
	W1225 18:27:51.997321    9123 root.go:314] Error reading config file at /home/jenkins/minikube-integration/22301-5579/.minikube/config/config.json: open /home/jenkins/minikube-integration/22301-5579/.minikube/config/config.json: no such file or directory
	I1225 18:27:51.997800    9123 out.go:368] Setting JSON to true
	I1225 18:27:51.998681    9123 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":620,"bootTime":1766686652,"procs":205,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1225 18:27:51.998737    9123 start.go:143] virtualization: kvm guest
	I1225 18:27:52.002563    9123 out.go:99] [download-only-658134] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	W1225 18:27:52.002680    9123 preload.go:372] Failed to list preload files: open /home/jenkins/minikube-integration/22301-5579/.minikube/cache/preloaded-tarball: no such file or directory
	I1225 18:27:52.002772    9123 notify.go:221] Checking for updates...
	I1225 18:27:52.003857    9123 out.go:171] MINIKUBE_LOCATION=22301
	I1225 18:27:52.005182    9123 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1225 18:27:52.006683    9123 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/22301-5579/kubeconfig
	I1225 18:27:52.007914    9123 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/22301-5579/.minikube
	I1225 18:27:52.009169    9123 out.go:171] MINIKUBE_BIN=out/minikube-linux-amd64
	W1225 18:27:52.011243    9123 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1225 18:27:52.011446    9123 driver.go:422] Setting default libvirt URI to qemu:///system
	I1225 18:27:52.037012    9123 docker.go:124] docker version: linux-29.1.3:Docker Engine - Community
	I1225 18:27:52.037114    9123 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1225 18:27:52.244912    9123 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:30 OomKillDisable:false NGoroutines:63 SystemTime:2025-12-25 18:27:52.235237174 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:dea7da592f5d1d2b7755e3a161be07f43fad8f75 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1225 18:27:52.245030    9123 docker.go:319] overlay module found
	I1225 18:27:52.246661    9123 out.go:99] Using the docker driver based on user configuration
	I1225 18:27:52.246697    9123 start.go:309] selected driver: docker
	I1225 18:27:52.246706    9123 start.go:928] validating driver "docker" against <nil>
	I1225 18:27:52.246793    9123 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1225 18:27:52.298303    9123 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:30 OomKillDisable:false NGoroutines:63 SystemTime:2025-12-25 18:27:52.28980906 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x8
6_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:dea7da592f5d1d2b7755e3a161be07f43fad8f75 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[ma
p[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1225 18:27:52.298500    9123 start_flags.go:333] no existing cluster config was found, will generate one from the flags 
	I1225 18:27:52.299033    9123 start_flags.go:417] Using suggested 8000MB memory alloc based on sys=32093MB, container=32093MB
	I1225 18:27:52.299200    9123 start_flags.go:1001] Wait components to verify : map[apiserver:true system_pods:true]
	I1225 18:27:52.300915    9123 out.go:171] Using Docker driver with root privileges
	
	
	* The control-plane node download-only-658134 host does not exist
	  To start a cluster, run: "minikube start -p download-only-658134"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.28.0/LogsDuration (0.07s)
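Note: the non-zero exit is expected; a download-only profile never creates a host, so "minikube logs" fails (exit status 85 here) while still printing the audit table above. To observe the same behaviour against any download-only profile (profile name illustrative):

    minikube logs -p download-only-658134; echo "exit status: $?"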

                                                
                                    
TestDownloadOnly/v1.28.0/DeleteAll (0.22s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.28.0/DeleteAll (0.22s)

                                                
                                    
TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.14s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-658134
--- PASS: TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.14s)

                                                
                                    
TestDownloadOnly/v1.34.3/json-events (3.1s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.3/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-964215 --force --alsologtostderr --kubernetes-version=v1.34.3 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-964215 --force --alsologtostderr --kubernetes-version=v1.34.3 --container-runtime=crio --driver=docker  --container-runtime=crio: (3.09705608s)
--- PASS: TestDownloadOnly/v1.34.3/json-events (3.10s)

                                                
                                    
TestDownloadOnly/v1.34.3/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.3/preload-exists
I1225 18:27:59.835342    9112 preload.go:188] Checking if preload exists for k8s version v1.34.3 and runtime crio
I1225 18:27:59.835377    9112 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22301-5579/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.3-cri-o-overlay-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.34.3/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.34.3/LogsDuration (0.07s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.3/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-964215
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-964215: exit status 85 (69.422739ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                   ARGS                                                                                    │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -o=json --download-only -p download-only-658134 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio │ download-only-658134 │ jenkins │ v1.37.0 │ 25 Dec 25 18:27 UTC │                     │
	│ delete  │ --all                                                                                                                                                                     │ minikube             │ jenkins │ v1.37.0 │ 25 Dec 25 18:27 UTC │ 25 Dec 25 18:27 UTC │
	│ delete  │ -p download-only-658134                                                                                                                                                   │ download-only-658134 │ jenkins │ v1.37.0 │ 25 Dec 25 18:27 UTC │ 25 Dec 25 18:27 UTC │
	│ start   │ -o=json --download-only -p download-only-964215 --force --alsologtostderr --kubernetes-version=v1.34.3 --container-runtime=crio --driver=docker  --container-runtime=crio │ download-only-964215 │ jenkins │ v1.37.0 │ 25 Dec 25 18:27 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/25 18:27:56
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1225 18:27:56.789935    9481 out.go:360] Setting OutFile to fd 1 ...
	I1225 18:27:56.790027    9481 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1225 18:27:56.790031    9481 out.go:374] Setting ErrFile to fd 2...
	I1225 18:27:56.790035    9481 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1225 18:27:56.790220    9481 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22301-5579/.minikube/bin
	I1225 18:27:56.790642    9481 out.go:368] Setting JSON to true
	I1225 18:27:56.791392    9481 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":625,"bootTime":1766686652,"procs":175,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1225 18:27:56.791441    9481 start.go:143] virtualization: kvm guest
	I1225 18:27:56.793409    9481 out.go:99] [download-only-964215] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1225 18:27:56.793555    9481 notify.go:221] Checking for updates...
	I1225 18:27:56.794830    9481 out.go:171] MINIKUBE_LOCATION=22301
	I1225 18:27:56.796160    9481 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1225 18:27:56.797500    9481 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/22301-5579/kubeconfig
	I1225 18:27:56.798762    9481 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/22301-5579/.minikube
	I1225 18:27:56.800059    9481 out.go:171] MINIKUBE_BIN=out/minikube-linux-amd64
	W1225 18:27:56.802218    9481 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1225 18:27:56.802414    9481 driver.go:422] Setting default libvirt URI to qemu:///system
	I1225 18:27:56.824666    9481 docker.go:124] docker version: linux-29.1.3:Docker Engine - Community
	I1225 18:27:56.824734    9481 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1225 18:27:56.877988    9481 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:30 OomKillDisable:false NGoroutines:51 SystemTime:2025-12-25 18:27:56.868669699 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:dea7da592f5d1d2b7755e3a161be07f43fad8f75 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1225 18:27:56.878113    9481 docker.go:319] overlay module found
	I1225 18:27:56.879835    9481 out.go:99] Using the docker driver based on user configuration
	I1225 18:27:56.879868    9481 start.go:309] selected driver: docker
	I1225 18:27:56.879874    9481 start.go:928] validating driver "docker" against <nil>
	I1225 18:27:56.879974    9481 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1225 18:27:56.934587    9481 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:30 OomKillDisable:false NGoroutines:51 SystemTime:2025-12-25 18:27:56.925152178 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:dea7da592f5d1d2b7755e3a161be07f43fad8f75 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1225 18:27:56.934766    9481 start_flags.go:333] no existing cluster config was found, will generate one from the flags 
	I1225 18:27:56.935471    9481 start_flags.go:417] Using suggested 8000MB memory alloc based on sys=32093MB, container=32093MB
	I1225 18:27:56.935656    9481 start_flags.go:1001] Wait components to verify : map[apiserver:true system_pods:true]
	I1225 18:27:56.937401    9481 out.go:171] Using Docker driver with root privileges
	
	
	* The control-plane node download-only-964215 host does not exist
	  To start a cluster, run: "minikube start -p download-only-964215"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.34.3/LogsDuration (0.07s)

                                                
                                    
TestDownloadOnly/v1.34.3/DeleteAll (0.22s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.3/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.34.3/DeleteAll (0.22s)

                                                
                                    
TestDownloadOnly/v1.34.3/DeleteAlwaysSucceeds (0.14s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.3/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-964215
--- PASS: TestDownloadOnly/v1.34.3/DeleteAlwaysSucceeds (0.14s)

                                                
                                    
TestDownloadOnly/v1.35.0-rc.1/json-events (2.63s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.35.0-rc.1/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-904964 --force --alsologtostderr --kubernetes-version=v1.35.0-rc.1 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-904964 --force --alsologtostderr --kubernetes-version=v1.35.0-rc.1 --container-runtime=crio --driver=docker  --container-runtime=crio: (2.633221244s)
--- PASS: TestDownloadOnly/v1.35.0-rc.1/json-events (2.63s)

                                                
                                    
TestDownloadOnly/v1.35.0-rc.1/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.35.0-rc.1/preload-exists
I1225 18:28:02.902518    9112 preload.go:188] Checking if preload exists for k8s version v1.35.0-rc.1 and runtime crio
I1225 18:28:02.902555    9112 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22301-5579/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-rc.1-cri-o-overlay-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.35.0-rc.1/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.35.0-rc.1/LogsDuration (0.07s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.35.0-rc.1/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-904964
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-904964: exit status 85 (65.697263ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	┌─────────┬────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                      ARGS                                                                                      │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -o=json --download-only -p download-only-658134 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio      │ download-only-658134 │ jenkins │ v1.37.0 │ 25 Dec 25 18:27 UTC │                     │
	│ delete  │ --all                                                                                                                                                                          │ minikube             │ jenkins │ v1.37.0 │ 25 Dec 25 18:27 UTC │ 25 Dec 25 18:27 UTC │
	│ delete  │ -p download-only-658134                                                                                                                                                        │ download-only-658134 │ jenkins │ v1.37.0 │ 25 Dec 25 18:27 UTC │ 25 Dec 25 18:27 UTC │
	│ start   │ -o=json --download-only -p download-only-964215 --force --alsologtostderr --kubernetes-version=v1.34.3 --container-runtime=crio --driver=docker  --container-runtime=crio      │ download-only-964215 │ jenkins │ v1.37.0 │ 25 Dec 25 18:27 UTC │                     │
	│ delete  │ --all                                                                                                                                                                          │ minikube             │ jenkins │ v1.37.0 │ 25 Dec 25 18:27 UTC │ 25 Dec 25 18:28 UTC │
	│ delete  │ -p download-only-964215                                                                                                                                                        │ download-only-964215 │ jenkins │ v1.37.0 │ 25 Dec 25 18:28 UTC │ 25 Dec 25 18:28 UTC │
	│ start   │ -o=json --download-only -p download-only-904964 --force --alsologtostderr --kubernetes-version=v1.35.0-rc.1 --container-runtime=crio --driver=docker  --container-runtime=crio │ download-only-904964 │ jenkins │ v1.37.0 │ 25 Dec 25 18:28 UTC │                     │
	└─────────┴────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/25 18:28:00
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1225 18:28:00.320350    9840 out.go:360] Setting OutFile to fd 1 ...
	I1225 18:28:00.320579    9840 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1225 18:28:00.320589    9840 out.go:374] Setting ErrFile to fd 2...
	I1225 18:28:00.320595    9840 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1225 18:28:00.320803    9840 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22301-5579/.minikube/bin
	I1225 18:28:00.321318    9840 out.go:368] Setting JSON to true
	I1225 18:28:00.322119    9840 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":628,"bootTime":1766686652,"procs":175,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1225 18:28:00.322166    9840 start.go:143] virtualization: kvm guest
	I1225 18:28:00.323885    9840 out.go:99] [download-only-904964] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1225 18:28:00.324057    9840 notify.go:221] Checking for updates...
	I1225 18:28:00.325099    9840 out.go:171] MINIKUBE_LOCATION=22301
	I1225 18:28:00.326074    9840 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1225 18:28:00.327199    9840 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/22301-5579/kubeconfig
	I1225 18:28:00.328275    9840 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/22301-5579/.minikube
	I1225 18:28:00.329395    9840 out.go:171] MINIKUBE_BIN=out/minikube-linux-amd64
	W1225 18:28:00.331150    9840 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1225 18:28:00.331342    9840 driver.go:422] Setting default libvirt URI to qemu:///system
	I1225 18:28:00.353382    9840 docker.go:124] docker version: linux-29.1.3:Docker Engine - Community
	I1225 18:28:00.353450    9840 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1225 18:28:00.406486    9840 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:30 OomKillDisable:false NGoroutines:50 SystemTime:2025-12-25 18:28:00.396810342 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:dea7da592f5d1d2b7755e3a161be07f43fad8f75 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1225 18:28:00.406629    9840 docker.go:319] overlay module found
	I1225 18:28:00.408384    9840 out.go:99] Using the docker driver based on user configuration
	I1225 18:28:00.408431    9840 start.go:309] selected driver: docker
	I1225 18:28:00.408440    9840 start.go:928] validating driver "docker" against <nil>
	I1225 18:28:00.408539    9840 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1225 18:28:00.461151    9840 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:30 OomKillDisable:false NGoroutines:50 SystemTime:2025-12-25 18:28:00.452379302 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:dea7da592f5d1d2b7755e3a161be07f43fad8f75 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1225 18:28:00.461352    9840 start_flags.go:333] no existing cluster config was found, will generate one from the flags 
	I1225 18:28:00.461851    9840 start_flags.go:417] Using suggested 8000MB memory alloc based on sys=32093MB, container=32093MB
	I1225 18:28:00.462012    9840 start_flags.go:1001] Wait components to verify : map[apiserver:true system_pods:true]
	I1225 18:28:00.463596    9840 out.go:171] Using Docker driver with root privileges
	
	
	* The control-plane node download-only-904964 host does not exist
	  To start a cluster, run: "minikube start -p download-only-904964"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.35.0-rc.1/LogsDuration (0.07s)

                                                
                                    
TestDownloadOnly/v1.35.0-rc.1/DeleteAll (0.22s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.35.0-rc.1/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.35.0-rc.1/DeleteAll (0.22s)

                                                
                                    
TestDownloadOnly/v1.35.0-rc.1/DeleteAlwaysSucceeds (0.14s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.35.0-rc.1/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-904964
--- PASS: TestDownloadOnly/v1.35.0-rc.1/DeleteAlwaysSucceeds (0.14s)

                                                
                                    
TestDownloadOnlyKic (0.38s)

                                                
                                                
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:231: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p download-docker-876757 --alsologtostderr --driver=docker  --container-runtime=crio
helpers_test.go:176: Cleaning up "download-docker-876757" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p download-docker-876757
--- PASS: TestDownloadOnlyKic (0.38s)

                                                
                                    
TestBinaryMirror (0.82s)

                                                
                                                
=== RUN   TestBinaryMirror
I1225 18:28:04.113872    9112 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.34.3/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.34.3/bin/linux/amd64/kubectl.sha256
aaa_download_only_test.go:309: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p binary-mirror-321939 --alsologtostderr --binary-mirror http://127.0.0.1:35961 --driver=docker  --container-runtime=crio
helpers_test.go:176: Cleaning up "binary-mirror-321939" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p binary-mirror-321939
--- PASS: TestBinaryMirror (0.82s)

                                                
                                    
TestOffline (55.67s)

                                                
                                                
=== RUN   TestOffline
=== PAUSE TestOffline

                                                
                                                

                                                
                                                
=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-amd64 start -p offline-crio-697454 --alsologtostderr -v=1 --memory=3072 --wait=true --driver=docker  --container-runtime=crio
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-amd64 start -p offline-crio-697454 --alsologtostderr -v=1 --memory=3072 --wait=true --driver=docker  --container-runtime=crio: (53.13760816s)
helpers_test.go:176: Cleaning up "offline-crio-697454" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p offline-crio-697454
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p offline-crio-697454: (2.52604122s)
--- PASS: TestOffline (55.67s)

                                                
                                    
TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.06s)

                                                
                                                
=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:1002: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-335994
addons_test.go:1002: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-335994: exit status 85 (64.492667ms)

                                                
                                                
-- stdout --
	* Profile "addons-335994" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-335994"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.06s)

                                                
                                    
TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.06s)

                                                
                                                
=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:1013: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-335994
addons_test.go:1013: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-335994: exit status 85 (62.421644ms)

                                                
                                                
-- stdout --
	* Profile "addons-335994" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-335994"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.06s)

                                                
                                    
TestAddons/Setup (94.72s)

                                                
                                                
=== RUN   TestAddons/Setup
addons_test.go:110: (dbg) Run:  out/minikube-linux-amd64 start -p addons-335994 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher
addons_test.go:110: (dbg) Done: out/minikube-linux-amd64 start -p addons-335994 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher: (1m34.723255177s)
--- PASS: TestAddons/Setup (94.72s)

                                                
                                    
TestAddons/serial/GCPAuth/Namespaces (0.13s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:632: (dbg) Run:  kubectl --context addons-335994 create ns new-namespace
addons_test.go:646: (dbg) Run:  kubectl --context addons-335994 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.13s)

                                                
                                    
TestAddons/serial/GCPAuth/FakeCredentials (8.41s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/FakeCredentials
addons_test.go:677: (dbg) Run:  kubectl --context addons-335994 create -f testdata/busybox.yaml
addons_test.go:684: (dbg) Run:  kubectl --context addons-335994 create sa gcp-auth-test
addons_test.go:690: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:353: "busybox" [f1453947-7f99-4fd0-915d-6261fd847080] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:353: "busybox" [f1453947-7f99-4fd0-915d-6261fd847080] Running
addons_test.go:690: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: integration-test=busybox healthy within 8.004098739s
addons_test.go:696: (dbg) Run:  kubectl --context addons-335994 exec busybox -- /bin/sh -c "printenv GOOGLE_APPLICATION_CREDENTIALS"
addons_test.go:708: (dbg) Run:  kubectl --context addons-335994 describe sa gcp-auth-test
addons_test.go:746: (dbg) Run:  kubectl --context addons-335994 exec busybox -- /bin/sh -c "printenv GOOGLE_CLOUD_PROJECT"
--- PASS: TestAddons/serial/GCPAuth/FakeCredentials (8.41s)

                                                
                                    
TestAddons/StoppedEnableDisable (18.97s)

                                                
                                                
=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:174: (dbg) Run:  out/minikube-linux-amd64 stop -p addons-335994
addons_test.go:174: (dbg) Done: out/minikube-linux-amd64 stop -p addons-335994: (18.697252152s)
addons_test.go:178: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-335994
addons_test.go:182: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-335994
addons_test.go:187: (dbg) Run:  out/minikube-linux-amd64 addons disable gvisor -p addons-335994
--- PASS: TestAddons/StoppedEnableDisable (18.97s)

                                                
                                    
TestCertOptions (25.81s)

                                                
                                                
=== RUN   TestCertOptions
=== PAUSE TestCertOptions

                                                
                                                

                                                
                                                
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-amd64 start -p cert-options-026286 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio
cert_options_test.go:49: (dbg) Done: out/minikube-linux-amd64 start -p cert-options-026286 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio: (22.765975369s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-amd64 -p cert-options-026286 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-026286 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-amd64 ssh -p cert-options-026286 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:176: Cleaning up "cert-options-026286" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-options-026286
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p cert-options-026286: (2.397514158s)
--- PASS: TestCertOptions (25.81s)
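For reference, the SAN check above reads the apiserver certificate over ssh with openssl. A minimal Go sketch along the same lines, assuming the cert-options-026286 profile from this log is still running and the binary is invoked from the repo root as out/minikube-linux-amd64; the strings.Contains checks on the extra --apiserver-ips/--apiserver-names values are illustrative of what the test presumably asserts, not a copy of its logic:

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    func main() {
        // Same command the test runs: dump the apiserver certificate from inside the node.
        out, err := exec.Command("out/minikube-linux-amd64", "-p", "cert-options-026286",
            "ssh", "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt").CombinedOutput()
        if err != nil {
            fmt.Println("ssh failed:", err)
            return
        }
        cert := string(out)
        // Hypothetical spot-check: the extra SANs passed via --apiserver-ips / --apiserver-names.
        for _, want := range []string{"192.168.15.15", "www.google.com"} {
            fmt.Printf("%s present in apiserver cert: %v\n", want, strings.Contains(cert, want))
        }
    }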

                                                
                                    
TestCertExpiration (216.81s)

                                                
                                                
=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

                                                
                                                

                                                
                                                
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-002470 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio
E1225 18:57:43.365071    9112 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22301-5579/.minikube/profiles/functional-969923/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
cert_options_test.go:123: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-002470 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio: (24.872144111s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-002470 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio
cert_options_test.go:131: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-002470 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio: (8.596520301s)
helpers_test.go:176: Cleaning up "cert-expiration-002470" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-expiration-002470
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p cert-expiration-002470: (3.341864048s)
--- PASS: TestCertExpiration (216.81s)

                                                
                                    
TestForceSystemdFlag (23.06s)

                                                
                                                
=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-flag-000275 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
docker_test.go:91: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-flag-000275 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (20.390447231s)
docker_test.go:132: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-flag-000275 ssh "cat /etc/crio/crio.conf.d/02-crio.conf"
helpers_test.go:176: Cleaning up "force-systemd-flag-000275" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-flag-000275
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-flag-000275: (2.399611807s)
--- PASS: TestForceSystemdFlag (23.06s)

                                                
                                    
TestForceSystemdEnv (25.61s)

                                                
                                                
=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-env-768633 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
docker_test.go:155: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-env-768633 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (23.164564045s)
helpers_test.go:176: Cleaning up "force-systemd-env-768633" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-env-768633
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-env-768633: (2.442529766s)
--- PASS: TestForceSystemdEnv (25.61s)

                                                
                                    
TestErrorSpam/setup (18.55s)

                                                
                                                
=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-481949 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-481949 --driver=docker  --container-runtime=crio
error_spam_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -p nospam-481949 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-481949 --driver=docker  --container-runtime=crio: (18.54856215s)
--- PASS: TestErrorSpam/setup (18.55s)

                                                
                                    
TestErrorSpam/start (0.64s)

                                                
                                                
=== RUN   TestErrorSpam/start
error_spam_test.go:206: Cleaning up 1 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-481949 --log_dir /tmp/nospam-481949 start --dry-run
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-481949 --log_dir /tmp/nospam-481949 start --dry-run
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-481949 --log_dir /tmp/nospam-481949 start --dry-run
--- PASS: TestErrorSpam/start (0.64s)

                                                
                                    
TestErrorSpam/status (0.93s)

                                                
                                                
=== RUN   TestErrorSpam/status
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-481949 --log_dir /tmp/nospam-481949 status
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-481949 --log_dir /tmp/nospam-481949 status
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-481949 --log_dir /tmp/nospam-481949 status
--- PASS: TestErrorSpam/status (0.93s)

                                                
                                    
TestErrorSpam/pause (6.38s)

                                                
                                                
=== RUN   TestErrorSpam/pause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-481949 --log_dir /tmp/nospam-481949 pause
error_spam_test.go:149: (dbg) Non-zero exit: out/minikube-linux-amd64 -p nospam-481949 --log_dir /tmp/nospam-481949 pause: exit status 80 (2.203306006s)

                                                
                                                
-- stdout --
	* Pausing node nospam-481949 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-25T18:31:24Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
error_spam_test.go:151: "out/minikube-linux-amd64 -p nospam-481949 --log_dir /tmp/nospam-481949 pause" failed: exit status 80
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-481949 --log_dir /tmp/nospam-481949 pause
error_spam_test.go:149: (dbg) Non-zero exit: out/minikube-linux-amd64 -p nospam-481949 --log_dir /tmp/nospam-481949 pause: exit status 80 (1.999938065s)

                                                
                                                
-- stdout --
	* Pausing node nospam-481949 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-25T18:31:26Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
error_spam_test.go:151: "out/minikube-linux-amd64 -p nospam-481949 --log_dir /tmp/nospam-481949 pause" failed: exit status 80
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-481949 --log_dir /tmp/nospam-481949 pause
error_spam_test.go:172: (dbg) Non-zero exit: out/minikube-linux-amd64 -p nospam-481949 --log_dir /tmp/nospam-481949 pause: exit status 80 (2.180302299s)

                                                
                                                
-- stdout --
	* Pausing node nospam-481949 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-25T18:31:28Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
error_spam_test.go:174: "out/minikube-linux-amd64 -p nospam-481949 --log_dir /tmp/nospam-481949 pause" failed: exit status 80
--- PASS: TestErrorSpam/pause (6.38s)
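All three pause attempts above exit with status 80 because the command minikube runs inside the node, sudo runc list -f json, fails with "open /run/runc: no such file or directory"; the unpause entry below hits the identical error. A minimal Go sketch for re-running that check by hand, assuming the nospam-481949 profile from this log still exists and out/minikube-linux-amd64 is invoked from the repo root (the file name is illustrative):

    // repro_runc_list.go
    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        // The same command the pause path reports in its stderr, run over minikube ssh.
        cmd := exec.Command("out/minikube-linux-amd64", "-p", "nospam-481949",
            "ssh", "--", "sudo", "runc", "list", "-f", "json")
        out, err := cmd.CombinedOutput()
        fmt.Printf("output:\n%s", out)
        if err != nil {
            // With the state captured above, this reproduces the
            // "open /run/runc: no such file or directory" failure.
            fmt.Println("runc list failed:", err)
        }
    }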

                                                
                                    
TestErrorSpam/unpause (4.91s)

                                                
                                                
=== RUN   TestErrorSpam/unpause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-481949 --log_dir /tmp/nospam-481949 unpause
error_spam_test.go:149: (dbg) Non-zero exit: out/minikube-linux-amd64 -p nospam-481949 --log_dir /tmp/nospam-481949 unpause: exit status 80 (2.032909857s)

                                                
                                                
-- stdout --
	* Unpausing node nospam-481949 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_UNPAUSE: Pause: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-25T18:31:30Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
error_spam_test.go:151: "out/minikube-linux-amd64 -p nospam-481949 --log_dir /tmp/nospam-481949 unpause" failed: exit status 80
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-481949 --log_dir /tmp/nospam-481949 unpause
error_spam_test.go:149: (dbg) Non-zero exit: out/minikube-linux-amd64 -p nospam-481949 --log_dir /tmp/nospam-481949 unpause: exit status 80 (1.374008235s)

                                                
                                                
-- stdout --
	* Unpausing node nospam-481949 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_UNPAUSE: Pause: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-25T18:31:31Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
error_spam_test.go:151: "out/minikube-linux-amd64 -p nospam-481949 --log_dir /tmp/nospam-481949 unpause" failed: exit status 80
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-481949 --log_dir /tmp/nospam-481949 unpause
error_spam_test.go:172: (dbg) Non-zero exit: out/minikube-linux-amd64 -p nospam-481949 --log_dir /tmp/nospam-481949 unpause: exit status 80 (1.497787672s)

                                                
                                                
-- stdout --
	* Unpausing node nospam-481949 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_UNPAUSE: Pause: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-25T18:31:33Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
error_spam_test.go:174: "out/minikube-linux-amd64 -p nospam-481949 --log_dir /tmp/nospam-481949 unpause" failed: exit status 80
--- PASS: TestErrorSpam/unpause (4.91s)

                                                
                                    
TestErrorSpam/stop (8.09s)

                                                
                                                
=== RUN   TestErrorSpam/stop
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-481949 --log_dir /tmp/nospam-481949 stop
error_spam_test.go:149: (dbg) Done: out/minikube-linux-amd64 -p nospam-481949 --log_dir /tmp/nospam-481949 stop: (7.89216509s)
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-481949 --log_dir /tmp/nospam-481949 stop
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-481949 --log_dir /tmp/nospam-481949 stop
--- PASS: TestErrorSpam/stop (8.09s)

                                                
                                    
TestFunctional/serial/CopySyncFile (0s)

                                                
                                                
=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1865: local sync path: /home/jenkins/minikube-integration/22301-5579/.minikube/files/etc/test/nested/copy/9112/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

                                                
                                    
TestFunctional/serial/StartWithProxy (39.88s)

                                                
                                                
=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2244: (dbg) Run:  out/minikube-linux-amd64 start -p functional-984202 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio
functional_test.go:2244: (dbg) Done: out/minikube-linux-amd64 start -p functional-984202 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio: (39.8826333s)
--- PASS: TestFunctional/serial/StartWithProxy (39.88s)

                                                
                                    
TestFunctional/serial/AuditLog (0s)

                                                
                                                
=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

                                                
                                    
TestFunctional/serial/SoftStart (5.98s)

                                                
                                                
=== RUN   TestFunctional/serial/SoftStart
I1225 18:32:26.345463    9112 config.go:182] Loaded profile config "functional-984202": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
functional_test.go:674: (dbg) Run:  out/minikube-linux-amd64 start -p functional-984202 --alsologtostderr -v=8
functional_test.go:674: (dbg) Done: out/minikube-linux-amd64 start -p functional-984202 --alsologtostderr -v=8: (5.980189092s)
functional_test.go:678: soft start took 5.980896724s for "functional-984202" cluster.
I1225 18:32:32.326038    9112 config.go:182] Loaded profile config "functional-984202": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
--- PASS: TestFunctional/serial/SoftStart (5.98s)

                                                
                                    
TestFunctional/serial/KubeContext (0.05s)

                                                
                                                
=== RUN   TestFunctional/serial/KubeContext
functional_test.go:696: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.05s)

                                                
                                    
TestFunctional/serial/KubectlGetPods (0.07s)

                                                
                                                
=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:711: (dbg) Run:  kubectl --context functional-984202 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.07s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_remote (2.44s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1069: (dbg) Run:  out/minikube-linux-amd64 -p functional-984202 cache add registry.k8s.io/pause:3.1
functional_test.go:1069: (dbg) Run:  out/minikube-linux-amd64 -p functional-984202 cache add registry.k8s.io/pause:3.3
functional_test.go:1069: (dbg) Run:  out/minikube-linux-amd64 -p functional-984202 cache add registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (2.44s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_local (0.87s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1097: (dbg) Run:  docker build -t minikube-local-cache-test:functional-984202 /tmp/TestFunctionalserialCacheCmdcacheadd_local3155896036/001
functional_test.go:1109: (dbg) Run:  out/minikube-linux-amd64 -p functional-984202 cache add minikube-local-cache-test:functional-984202
functional_test.go:1114: (dbg) Run:  out/minikube-linux-amd64 -p functional-984202 cache delete minikube-local-cache-test:functional-984202
functional_test.go:1103: (dbg) Run:  docker rmi minikube-local-cache-test:functional-984202
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (0.87s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1122: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/list (0.06s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1130: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.06s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.28s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1144: (dbg) Run:  out/minikube-linux-amd64 -p functional-984202 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.28s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/cache_reload (1.48s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1167: (dbg) Run:  out/minikube-linux-amd64 -p functional-984202 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1173: (dbg) Run:  out/minikube-linux-amd64 -p functional-984202 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1173: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-984202 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (269.034616ms)

                                                
                                                
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:1178: (dbg) Run:  out/minikube-linux-amd64 -p functional-984202 cache reload
functional_test.go:1183: (dbg) Run:  out/minikube-linux-amd64 -p functional-984202 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.48s)
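The cache_reload steps above remove registry.k8s.io/pause:latest from the node, confirm that crictl inspecti then fails, run cache reload, and confirm inspecti succeeds again. A minimal standalone Go sketch of that sequence, using the profile name and image tag from the log (illustrative only, not the test code itself):

    package main

    import (
        "fmt"
        "os/exec"
    )

    // run executes a command and prints its combined output, returning any error.
    func run(args ...string) error {
        out, err := exec.Command(args[0], args[1:]...).CombinedOutput()
        fmt.Printf("$ %v\n%s\n", args, out)
        return err
    }

    func main() {
        mk := "out/minikube-linux-amd64"
        p := "functional-984202"
        // Remove the cached image from the node, then confirm inspecti fails ...
        run(mk, "-p", p, "ssh", "sudo crictl rmi registry.k8s.io/pause:latest")
        fmt.Println("inspecti before reload (expected error):",
            run(mk, "-p", p, "ssh", "sudo crictl inspecti registry.k8s.io/pause:latest"))
        // ... then restore it from minikube's local cache and re-check.
        run(mk, "-p", p, "cache", "reload")
        fmt.Println("inspecti after reload (expected nil):",
            run(mk, "-p", p, "ssh", "sudo crictl inspecti registry.k8s.io/pause:latest"))
    }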

                                                
                                    
TestFunctional/serial/CacheCmd/cache/delete (0.12s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1192: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1192: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.12s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmd (0.11s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:731: (dbg) Run:  out/minikube-linux-amd64 -p functional-984202 kubectl -- --context functional-984202 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.11s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmdDirectly (0.11s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:756: (dbg) Run:  out/kubectl --context functional-984202 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.11s)

                                                
                                    
TestFunctional/serial/ExtraConfig (66.82s)

                                                
                                                
=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:772: (dbg) Run:  out/minikube-linux-amd64 start -p functional-984202 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:772: (dbg) Done: out/minikube-linux-amd64 start -p functional-984202 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (1m6.823674023s)
functional_test.go:776: restart took 1m6.823815893s for "functional-984202" cluster.
I1225 18:33:44.812150    9112 config.go:182] Loaded profile config "functional-984202": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
--- PASS: TestFunctional/serial/ExtraConfig (66.82s)

                                                
                                    
TestFunctional/serial/ComponentHealth (0.06s)

                                                
                                                
=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:825: (dbg) Run:  kubectl --context functional-984202 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:840: etcd phase: Running
functional_test.go:850: etcd status: Ready
functional_test.go:840: kube-apiserver phase: Running
functional_test.go:850: kube-apiserver status: Ready
functional_test.go:840: kube-controller-manager phase: Running
functional_test.go:850: kube-controller-manager status: Ready
functional_test.go:840: kube-scheduler phase: Running
functional_test.go:850: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.06s)

                                                
                                    
TestFunctional/serial/LogsCmd (1.17s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1256: (dbg) Run:  out/minikube-linux-amd64 -p functional-984202 logs
functional_test.go:1256: (dbg) Done: out/minikube-linux-amd64 -p functional-984202 logs: (1.170733521s)
--- PASS: TestFunctional/serial/LogsCmd (1.17s)

                                                
                                    
TestFunctional/serial/LogsFileCmd (1.19s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1270: (dbg) Run:  out/minikube-linux-amd64 -p functional-984202 logs --file /tmp/TestFunctionalserialLogsFileCmd1616535836/001/logs.txt
functional_test.go:1270: (dbg) Done: out/minikube-linux-amd64 -p functional-984202 logs --file /tmp/TestFunctionalserialLogsFileCmd1616535836/001/logs.txt: (1.186039757s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.19s)

                                                
                                    
TestFunctional/serial/InvalidService (4.02s)

                                                
                                                
=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2331: (dbg) Run:  kubectl --context functional-984202 apply -f testdata/invalidsvc.yaml
functional_test.go:2345: (dbg) Run:  out/minikube-linux-amd64 service invalid-svc -p functional-984202
functional_test.go:2345: (dbg) Non-zero exit: out/minikube-linux-amd64 service invalid-svc -p functional-984202: exit status 115 (345.115863ms)

                                                
                                                
-- stdout --
	┌───────────┬─────────────┬─────────────┬───────────────────────────┐
	│ NAMESPACE │    NAME     │ TARGET PORT │            URL            │
	├───────────┼─────────────┼─────────────┼───────────────────────────┤
	│ default   │ invalid-svc │ 80          │ http://192.168.49.2:32229 │
	└───────────┴─────────────┴─────────────┴───────────────────────────┘
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:2337: (dbg) Run:  kubectl --context functional-984202 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (4.02s)

                                                
                                    
TestFunctional/parallel/ConfigCmd (0.47s)

                                                
                                                
=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1219: (dbg) Run:  out/minikube-linux-amd64 -p functional-984202 config unset cpus
functional_test.go:1219: (dbg) Run:  out/minikube-linux-amd64 -p functional-984202 config get cpus
functional_test.go:1219: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-984202 config get cpus: exit status 14 (72.134682ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
functional_test.go:1219: (dbg) Run:  out/minikube-linux-amd64 -p functional-984202 config set cpus 2
functional_test.go:1219: (dbg) Run:  out/minikube-linux-amd64 -p functional-984202 config get cpus
functional_test.go:1219: (dbg) Run:  out/minikube-linux-amd64 -p functional-984202 config unset cpus
functional_test.go:1219: (dbg) Run:  out/minikube-linux-amd64 -p functional-984202 config get cpus
functional_test.go:1219: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-984202 config get cpus: exit status 14 (85.809459ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.47s)

                                                
                                    
TestFunctional/parallel/DashboardCmd (5.3s)

                                                
                                                
=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:920: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 0 -p functional-984202 --alsologtostderr -v=1]
functional_test.go:925: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 0 -p functional-984202 --alsologtostderr -v=1] ...
helpers_test.go:526: unable to kill pid 47901: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (5.30s)

                                                
                                    
TestFunctional/parallel/DryRun (0.38s)

                                                
                                                
=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:994: (dbg) Run:  out/minikube-linux-amd64 start -p functional-984202 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio
functional_test.go:994: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-984202 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio: exit status 23 (163.452842ms)

                                                
                                                
-- stdout --
	* [functional-984202] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=22301
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22301-5579/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22301-5579/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1225 18:34:15.824221   47007 out.go:360] Setting OutFile to fd 1 ...
	I1225 18:34:15.824484   47007 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1225 18:34:15.824494   47007 out.go:374] Setting ErrFile to fd 2...
	I1225 18:34:15.824500   47007 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1225 18:34:15.824711   47007 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22301-5579/.minikube/bin
	I1225 18:34:15.825225   47007 out.go:368] Setting JSON to false
	I1225 18:34:15.826288   47007 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":1004,"bootTime":1766686652,"procs":247,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1225 18:34:15.826367   47007 start.go:143] virtualization: kvm guest
	I1225 18:34:15.828807   47007 out.go:179] * [functional-984202] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1225 18:34:15.830383   47007 out.go:179]   - MINIKUBE_LOCATION=22301
	I1225 18:34:15.830412   47007 notify.go:221] Checking for updates...
	I1225 18:34:15.832885   47007 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1225 18:34:15.834093   47007 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22301-5579/kubeconfig
	I1225 18:34:15.835117   47007 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22301-5579/.minikube
	I1225 18:34:15.836376   47007 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1225 18:34:15.837457   47007 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1225 18:34:15.839038   47007 config.go:182] Loaded profile config "functional-984202": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1225 18:34:15.839674   47007 driver.go:422] Setting default libvirt URI to qemu:///system
	I1225 18:34:15.865475   47007 docker.go:124] docker version: linux-29.1.3:Docker Engine - Community
	I1225 18:34:15.865557   47007 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1225 18:34:15.922787   47007 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:55 SystemTime:2025-12-25 18:34:15.912628555 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:dea7da592f5d1d2b7755e3a161be07f43fad8f75 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1225 18:34:15.922886   47007 docker.go:319] overlay module found
	I1225 18:34:15.924849   47007 out.go:179] * Using the docker driver based on existing profile
	I1225 18:34:15.926157   47007 start.go:309] selected driver: docker
	I1225 18:34:15.926172   47007 start.go:928] validating driver "docker" against &{Name:functional-984202 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:functional-984202 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] Moun
tPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1225 18:34:15.926249   47007 start.go:939] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1225 18:34:15.927855   47007 out.go:203] 
	W1225 18:34:15.929027   47007 out.go:285] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1225 18:34:15.930555   47007 out.go:203] 

                                                
                                                
** /stderr **
functional_test.go:1011: (dbg) Run:  out/minikube-linux-amd64 start -p functional-984202 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
--- PASS: TestFunctional/parallel/DryRun (0.38s)
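
The dry-run above fails fast because the requested 250MiB is below minikube's usable floor of 1800MB. As a rough illustration only (the file and function names here are hypothetical, not minikube's internals), a pre-flight check of that shape could look like this in Go:

// memcheck.go - illustrative sketch of the pre-flight memory validation the
// dry-run log above reports; the 1800MB floor and error code are taken from
// the log text, everything else (names, structure) is hypothetical.
package main

import (
	"fmt"
	"os"
)

const minUsableMemoryMB = 1800 // floor reported by the RSRC_INSUFFICIENT_REQ_MEMORY message

// validateRequestedMemory mimics the check that turns "--memory 250MB" into
// an early exit before any container is created.
func validateRequestedMemory(requestedMB int) error {
	if requestedMB < minUsableMemoryMB {
		return fmt.Errorf("RSRC_INSUFFICIENT_REQ_MEMORY: requested %dMiB is less than the usable minimum of %dMB",
			requestedMB, minUsableMemoryMB)
	}
	return nil
}

func main() {
	if err := validateRequestedMemory(250); err != nil {
		fmt.Fprintln(os.Stderr, "X Exiting due to", err)
		os.Exit(23) // the report records exit status 23 for this failure (see the InternationalLanguage run below)
	}
	fmt.Println("memory request accepted")
}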

                                                
                                    
TestFunctional/parallel/InternationalLanguage (0.26s)

                                                
                                                
=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1040: (dbg) Run:  out/minikube-linux-amd64 start -p functional-984202 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio
functional_test.go:1040: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-984202 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio: exit status 23 (262.036412ms)

                                                
                                                
-- stdout --
	* [functional-984202] minikube v1.37.0 sur Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=22301
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22301-5579/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22301-5579/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote docker basé sur le profil existant
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1225 18:33:53.963450   42075 out.go:360] Setting OutFile to fd 1 ...
	I1225 18:33:53.963784   42075 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1225 18:33:53.963796   42075 out.go:374] Setting ErrFile to fd 2...
	I1225 18:33:53.963802   42075 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1225 18:33:53.964252   42075 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22301-5579/.minikube/bin
	I1225 18:33:53.964827   42075 out.go:368] Setting JSON to false
	I1225 18:33:53.966034   42075 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":982,"bootTime":1766686652,"procs":224,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1225 18:33:53.966109   42075 start.go:143] virtualization: kvm guest
	I1225 18:33:53.967878   42075 out.go:179] * [functional-984202] minikube v1.37.0 sur Ubuntu 22.04 (kvm/amd64)
	I1225 18:33:53.970030   42075 notify.go:221] Checking for updates...
	I1225 18:33:53.970350   42075 out.go:179]   - MINIKUBE_LOCATION=22301
	I1225 18:33:53.972041   42075 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1225 18:33:53.973474   42075 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22301-5579/kubeconfig
	I1225 18:33:53.974912   42075 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22301-5579/.minikube
	I1225 18:33:53.976274   42075 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1225 18:33:53.977593   42075 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1225 18:33:53.979944   42075 config.go:182] Loaded profile config "functional-984202": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1225 18:33:53.980863   42075 driver.go:422] Setting default libvirt URI to qemu:///system
	I1225 18:33:54.019231   42075 docker.go:124] docker version: linux-29.1.3:Docker Engine - Community
	I1225 18:33:54.019771   42075 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1225 18:33:54.112343   42075 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:39 OomKillDisable:false NGoroutines:57 SystemTime:2025-12-25 18:33:54.091933702 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:dea7da592f5d1d2b7755e3a161be07f43fad8f75 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1225 18:33:54.112483   42075 docker.go:319] overlay module found
	I1225 18:33:54.114029   42075 out.go:179] * Utilisation du pilote docker basé sur le profil existant
	I1225 18:33:54.115365   42075 start.go:309] selected driver: docker
	I1225 18:33:54.115382   42075 start.go:928] validating driver "docker" against &{Name:functional-984202 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:functional-984202 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] Moun
tPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1225 18:33:54.115535   42075 start.go:939] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1225 18:33:54.117580   42075 out.go:203] 
	W1225 18:33:54.118800   42075 out.go:285] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I1225 18:33:54.120006   42075 out.go:203] 

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.26s)
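
The same dry-run, executed under a French locale, produces the localized RSRC_INSUFFICIENT_REQ_MEMORY message shown above. A minimal sketch of reproducing that outside the harness follows; the harness's exact locale setup is not visible in this log, so exporting LC_ALL=fr_FR.UTF-8 here is an assumption.

// i18n_check.go - sketch of reproducing the localized dry-run above.
package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

func main() {
	cmd := exec.Command("out/minikube-linux-amd64", "start", "-p", "functional-984202",
		"--dry-run", "--memory", "250MB", "--driver=docker", "--container-runtime=crio")
	cmd.Env = append(os.Environ(), "LC_ALL=fr_FR.UTF-8") // assumed locale setup, not shown in the log

	out, _ := cmd.CombinedOutput() // a non-zero exit (status 23) is expected, so the error is ignored here

	// The check only needs to see that the failure message was localized.
	if strings.Contains(string(out), "Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY") {
		fmt.Println("localized (French) error message found")
	} else {
		fmt.Println("expected French error message not found")
	}
}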

                                                
                                    
TestFunctional/parallel/StatusCmd (1.1s)

                                                
                                                
=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:869: (dbg) Run:  out/minikube-linux-amd64 -p functional-984202 status
functional_test.go:875: (dbg) Run:  out/minikube-linux-amd64 -p functional-984202 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:887: (dbg) Run:  out/minikube-linux-amd64 -p functional-984202 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.10s)
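
The -f template above pulls individual fields (.Host, .Kubelet, .APIServer, .Kubeconfig) out of the status object; the -o json form exposes the same data. A small sketch that decodes the JSON form, assuming a single-node profile so the output is one object and a running cluster so the command exits zero:

// status_json.go - sketch that parses `minikube status -o json` using the
// same field names the -f Go template above references.
package main

import (
	"encoding/json"
	"fmt"
	"log"
	"os/exec"
)

type clusterStatus struct {
	Host       string
	Kubelet    string
	APIServer  string
	Kubeconfig string
}

func main() {
	// status exits non-zero when components are down; a running profile is assumed.
	out, err := exec.Command("out/minikube-linux-amd64", "-p", "functional-984202",
		"status", "-o", "json").Output()
	if err != nil {
		log.Fatalf("status command failed: %v", err)
	}

	var st clusterStatus
	if err := json.Unmarshal(out, &st); err != nil {
		log.Fatalf("unexpected status output: %v", err)
	}
	fmt.Printf("host=%s kubelet=%s apiserver=%s kubeconfig=%s\n",
		st.Host, st.Kubelet, st.APIServer, st.Kubeconfig)
}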

                                                
                                    
TestFunctional/parallel/ServiceCmdConnect (6.71s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1641: (dbg) Run:  kubectl --context functional-984202 create deployment hello-node-connect --image ghcr.io/medyagh/image-mirrors/kicbase/echo-server
functional_test.go:1645: (dbg) Run:  kubectl --context functional-984202 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1650: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:353: "hello-node-connect-55dddb6747-5zj8g" [5428a5c7-8e57-437c-a1b0-42a8c4742f52] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
helpers_test.go:353: "hello-node-connect-55dddb6747-5zj8g" [5428a5c7-8e57-437c-a1b0-42a8c4742f52] Running
functional_test.go:1650: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 6.003515986s
functional_test.go:1659: (dbg) Run:  out/minikube-linux-amd64 -p functional-984202 service hello-node-connect --url
functional_test.go:1665: found endpoint for hello-node-connect: http://192.168.49.2:31071
functional_test.go:1685: http://192.168.49.2:31071: success! body:
Request served by hello-node-connect-55dddb6747-5zj8g

                                                
                                                
HTTP/1.1 GET /

                                                
                                                
Host: 192.168.49.2:31071
Accept-Encoding: gzip
User-Agent: Go-http-client/1.1
--- PASS: TestFunctional/parallel/ServiceCmdConnect (6.71s)
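
The connectivity check above boils down to resolving the NodePort URL and issuing a plain HTTP GET against the echo-server. A minimal sketch of that flow, reusing the profile and service names from the log:

// service_probe.go - resolve the NodePort URL with `minikube service ... --url`
// and issue a plain HTTP GET, as the test does above.
package main

import (
	"fmt"
	"io"
	"log"
	"net/http"
	"os/exec"
	"strings"
)

func main() {
	out, err := exec.Command("out/minikube-linux-amd64", "-p", "functional-984202",
		"service", "hello-node-connect", "--url").Output()
	if err != nil {
		log.Fatalf("could not resolve service URL: %v", err)
	}
	url := strings.TrimSpace(string(out))

	resp, err := http.Get(url)
	if err != nil {
		log.Fatalf("GET %s failed: %v", url, err)
	}
	defer resp.Body.Close()

	body, _ := io.ReadAll(resp.Body)
	fmt.Printf("%s -> %s\n", url, resp.Status)
	if strings.Contains(string(body), "Request served by hello-node-connect") {
		fmt.Println("echo-server responded as expected")
	}
}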

                                                
                                    
TestFunctional/parallel/AddonsCmd (0.29s)

                                                
                                                
=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1700: (dbg) Run:  out/minikube-linux-amd64 -p functional-984202 addons list
functional_test.go:1712: (dbg) Run:  out/minikube-linux-amd64 -p functional-984202 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.29s)

                                                
                                    
TestFunctional/parallel/PersistentVolumeClaim (20.65s)

                                                
                                                
=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:353: "storage-provisioner" [afb78acd-bddf-4818-9978-1c18035f2234] Running
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.004801064s
functional_test_pvc_test.go:55: (dbg) Run:  kubectl --context functional-984202 get storageclass -o=json
functional_test_pvc_test.go:75: (dbg) Run:  kubectl --context functional-984202 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:82: (dbg) Run:  kubectl --context functional-984202 get pvc myclaim -o=json
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-984202 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 6m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:353: "sp-pod" [820a06b8-7eb0-4001-9cdb-ef5c50b31eab] Pending
helpers_test.go:353: "sp-pod" [820a06b8-7eb0-4001-9cdb-ef5c50b31eab] Running
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 7.004614452s
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-984202 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:112: (dbg) Run:  kubectl --context functional-984202 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-984202 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 6m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:353: "sp-pod" [f05b2b4c-6ae0-4d85-9f22-cbcfa8c7b738] Pending
helpers_test.go:353: "sp-pod" [f05b2b4c-6ae0-4d85-9f22-cbcfa8c7b738] Running
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 7.003647131s
functional_test_pvc_test.go:120: (dbg) Run:  kubectl --context functional-984202 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (20.65s)
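
The PVC test's key assertion is that data written into the claim survives deleting and recreating the pod that mounts it. A condensed sketch of that flow follows; it shells out to kubectl like the test does, but uses `kubectl wait` for readiness, which the test itself does not.

// pvc_persistence.go - write a file into the mounted claim, recreate the pod,
// and confirm the file is still there.
package main

import (
	"fmt"
	"log"
	"os/exec"
)

func run(args ...string) string {
	out, err := exec.Command("kubectl", append([]string{"--context", "functional-984202"}, args...)...).CombinedOutput()
	if err != nil {
		log.Fatalf("kubectl %v failed: %v\n%s", args, err, out)
	}
	return string(out)
}

func main() {
	run("apply", "-f", "testdata/storage-provisioner/pod.yaml")
	run("wait", "--for=condition=Ready", "pod/sp-pod", "--timeout=6m")
	run("exec", "sp-pod", "--", "touch", "/tmp/mount/foo")

	// Recreate the pod; the PersistentVolumeClaim (and its data) outlives it.
	run("delete", "-f", "testdata/storage-provisioner/pod.yaml")
	run("apply", "-f", "testdata/storage-provisioner/pod.yaml")
	run("wait", "--for=condition=Ready", "pod/sp-pod", "--timeout=6m")

	fmt.Print(run("exec", "sp-pod", "--", "ls", "/tmp/mount"))
}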

                                                
                                    
TestFunctional/parallel/SSHCmd (0.63s)

                                                
                                                
=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1735: (dbg) Run:  out/minikube-linux-amd64 -p functional-984202 ssh "echo hello"
functional_test.go:1752: (dbg) Run:  out/minikube-linux-amd64 -p functional-984202 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.63s)

                                                
                                    
TestFunctional/parallel/CpCmd (2.25s)

                                                
                                                
=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p functional-984202 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p functional-984202 ssh -n functional-984202 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p functional-984202 cp functional-984202:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd2719358766/001/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p functional-984202 ssh -n functional-984202 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p functional-984202 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p functional-984202 ssh -n functional-984202 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (2.25s)

                                                
                                    
TestFunctional/parallel/MySQL (23.58s)

                                                
                                                
=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1803: (dbg) Run:  kubectl --context functional-984202 replace --force -f testdata/mysql.yaml
functional_test.go:1809: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:353: "mysql-6bcdcbc558-jwz5d" [9cdc0f83-134e-4b1f-a7c7-a6c37c322954] Pending
helpers_test.go:353: "mysql-6bcdcbc558-jwz5d" [9cdc0f83-134e-4b1f-a7c7-a6c37c322954] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:353: "mysql-6bcdcbc558-jwz5d" [9cdc0f83-134e-4b1f-a7c7-a6c37c322954] Running
functional_test.go:1809: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 14.004340433s
functional_test.go:1817: (dbg) Run:  kubectl --context functional-984202 exec mysql-6bcdcbc558-jwz5d -- mysql -ppassword -e "show databases;"
functional_test.go:1817: (dbg) Non-zero exit: kubectl --context functional-984202 exec mysql-6bcdcbc558-jwz5d -- mysql -ppassword -e "show databases;": exit status 1 (106.304808ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

                                                
                                                
** /stderr **
I1225 18:34:06.530318    9112 retry.go:84] will retry after 1.3s: exit status 1
functional_test.go:1817: (dbg) Run:  kubectl --context functional-984202 exec mysql-6bcdcbc558-jwz5d -- mysql -ppassword -e "show databases;"
functional_test.go:1817: (dbg) Non-zero exit: kubectl --context functional-984202 exec mysql-6bcdcbc558-jwz5d -- mysql -ppassword -e "show databases;": exit status 1 (123.755878ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

                                                
                                                
** /stderr **
functional_test.go:1817: (dbg) Run:  kubectl --context functional-984202 exec mysql-6bcdcbc558-jwz5d -- mysql -ppassword -e "show databases;"
functional_test.go:1817: (dbg) Non-zero exit: kubectl --context functional-984202 exec mysql-6bcdcbc558-jwz5d -- mysql -ppassword -e "show databases;": exit status 1 (137.236119ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

                                                
                                                
** /stderr **
functional_test.go:1817: (dbg) Run:  kubectl --context functional-984202 exec mysql-6bcdcbc558-jwz5d -- mysql -ppassword -e "show databases;"
functional_test.go:1817: (dbg) Non-zero exit: kubectl --context functional-984202 exec mysql-6bcdcbc558-jwz5d -- mysql -ppassword -e "show databases;": exit status 1 (88.819997ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

                                                
                                                
** /stderr **
functional_test.go:1817: (dbg) Run:  kubectl --context functional-984202 exec mysql-6bcdcbc558-jwz5d -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (23.58s)
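
The repeated non-zero exits above are expected while MySQL bootstraps: ERROR 2002 before the socket exists, ERROR 1045 while credentials are still being initialized, then success. A sketch of that readiness loop with a fixed retry interval (the test uses an increasing backoff):

// mysql_ready.go - poll the mysql client inside the pod until the server
// finishes initializing, mirroring the retries visible in the log above.
package main

import (
	"fmt"
	"log"
	"os/exec"
	"time"
)

func main() {
	const pod = "mysql-6bcdcbc558-jwz5d" // pod name taken from the log above
	deadline := time.Now().Add(10 * time.Minute)

	for time.Now().Before(deadline) {
		cmd := exec.Command("kubectl", "--context", "functional-984202",
			"exec", pod, "--", "mysql", "-ppassword", "-e", "show databases;")
		out, err := cmd.CombinedOutput()
		if err == nil {
			fmt.Print(string(out))
			return
		}
		log.Printf("mysql not ready yet (%v), retrying...", err)
		time.Sleep(2 * time.Second)
	}
	log.Fatal("mysql never became ready")
}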

                                                
                                    
TestFunctional/parallel/FileSync (0.32s)

                                                
                                                
=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1939: Checking for existence of /etc/test/nested/copy/9112/hosts within VM
functional_test.go:1941: (dbg) Run:  out/minikube-linux-amd64 -p functional-984202 ssh "sudo cat /etc/test/nested/copy/9112/hosts"
functional_test.go:1946: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.32s)

                                                
                                    
TestFunctional/parallel/CertSync (1.95s)

                                                
                                                
=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1982: Checking for existence of /etc/ssl/certs/9112.pem within VM
functional_test.go:1983: (dbg) Run:  out/minikube-linux-amd64 -p functional-984202 ssh "sudo cat /etc/ssl/certs/9112.pem"
functional_test.go:1982: Checking for existence of /usr/share/ca-certificates/9112.pem within VM
functional_test.go:1983: (dbg) Run:  out/minikube-linux-amd64 -p functional-984202 ssh "sudo cat /usr/share/ca-certificates/9112.pem"
functional_test.go:1982: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1983: (dbg) Run:  out/minikube-linux-amd64 -p functional-984202 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:2009: Checking for existence of /etc/ssl/certs/91122.pem within VM
functional_test.go:2010: (dbg) Run:  out/minikube-linux-amd64 -p functional-984202 ssh "sudo cat /etc/ssl/certs/91122.pem"
functional_test.go:2009: Checking for existence of /usr/share/ca-certificates/91122.pem within VM
functional_test.go:2010: (dbg) Run:  out/minikube-linux-amd64 -p functional-984202 ssh "sudo cat /usr/share/ca-certificates/91122.pem"
functional_test.go:2009: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2010: (dbg) Run:  out/minikube-linux-amd64 -p functional-984202 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (1.95s)
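
A sketch of the same presence checks, reading each synced certificate path back over `minikube ssh`; the paths are exactly the ones probed above, and any missing file is reported rather than asserted on.

// cert_sync_check.go - read back each certificate path the test expects
// minikube to have synced into the node.
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	paths := []string{
		"/etc/ssl/certs/9112.pem",
		"/usr/share/ca-certificates/9112.pem",
		"/etc/ssl/certs/51391683.0",
		"/etc/ssl/certs/91122.pem",
		"/usr/share/ca-certificates/91122.pem",
		"/etc/ssl/certs/3ec20f2e.0",
	}
	for _, p := range paths {
		err := exec.Command("out/minikube-linux-amd64", "-p", "functional-984202",
			"ssh", "sudo cat "+p).Run()
		if err != nil {
			fmt.Printf("MISSING %s (%v)\n", p, err)
			continue
		}
		fmt.Printf("ok      %s\n", p)
	}
}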

                                                
                                    
TestFunctional/parallel/NodeLabels (0.1s)

                                                
                                                
=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:234: (dbg) Run:  kubectl --context functional-984202 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.10s)
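
The test reads the first node's label keys through a go-template. An equivalent sketch that decodes `kubectl get nodes -o json` instead, modelling only the fields it needs:

// node_labels.go - list each node's label keys from the JSON output.
package main

import (
	"encoding/json"
	"fmt"
	"log"
	"os/exec"
)

type nodeList struct {
	Items []struct {
		Metadata struct {
			Name   string            `json:"name"`
			Labels map[string]string `json:"labels"`
		} `json:"metadata"`
	} `json:"items"`
}

func main() {
	out, err := exec.Command("kubectl", "--context", "functional-984202",
		"get", "nodes", "-o", "json").Output()
	if err != nil {
		log.Fatal(err)
	}
	var nodes nodeList
	if err := json.Unmarshal(out, &nodes); err != nil {
		log.Fatal(err)
	}
	for _, n := range nodes.Items {
		fmt.Println(n.Metadata.Name)
		for k := range n.Metadata.Labels {
			fmt.Println(" ", k)
		}
	}
}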

                                                
                                    
TestFunctional/parallel/NonActiveRuntimeDisabled (0.64s)

                                                
                                                
=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2037: (dbg) Run:  out/minikube-linux-amd64 -p functional-984202 ssh "sudo systemctl is-active docker"
functional_test.go:2037: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-984202 ssh "sudo systemctl is-active docker": exit status 1 (312.677589ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
functional_test.go:2037: (dbg) Run:  out/minikube-linux-amd64 -p functional-984202 ssh "sudo systemctl is-active containerd"
functional_test.go:2037: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-984202 ssh "sudo systemctl is-active containerd": exit status 1 (327.294724ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.64s)
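
With cri-o as the active runtime, `systemctl is-active` must report docker and containerd as inactive; the remote command then exits non-zero (the ssh wrapper surfaces status 3), so it is the stdout text that matters here, not the exit code. A small sketch of that check:

// runtime_check.go - assert that docker and containerd are not active
// alongside cri-o, inspecting stdout rather than the (expected) error.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func isActive(unit string) bool {
	out, _ := exec.Command("out/minikube-linux-amd64", "-p", "functional-984202",
		"ssh", "sudo systemctl is-active "+unit).Output()
	return strings.TrimSpace(string(out)) == "active"
}

func main() {
	for _, unit := range []string{"docker", "containerd"} {
		if isActive(unit) {
			fmt.Printf("unexpected: %s is active alongside cri-o\n", unit)
		} else {
			fmt.Printf("%s is not active, as expected\n", unit)
		}
	}
}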

                                                
                                    
TestFunctional/parallel/License (0.27s)

                                                
                                                
=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/License
functional_test.go:2298: (dbg) Run:  out/minikube-linux-amd64 license
--- PASS: TestFunctional/parallel/License (0.27s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_not_create (0.5s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1290: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1295: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.50s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_changes (0.16s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2129: (dbg) Run:  out/minikube-linux-amd64 -p functional-984202 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.16s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.15s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2129: (dbg) Run:  out/minikube-linux-amd64 -p functional-984202 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.15s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_clusters (0.16s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2129: (dbg) Run:  out/minikube-linux-amd64 -p functional-984202 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.16s)

                                                
                                    
TestFunctional/parallel/Version/short (0.06s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2266: (dbg) Run:  out/minikube-linux-amd64 -p functional-984202 version --short
--- PASS: TestFunctional/parallel/Version/short (0.06s)

                                                
                                    
TestFunctional/parallel/Version/components (0.53s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2280: (dbg) Run:  out/minikube-linux-amd64 -p functional-984202 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.53s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_list (0.49s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1330: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1335: Took "404.166842ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1344: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1349: Took "85.208658ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.49s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_json_output (0.47s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1381: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1386: Took "398.087385ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1394: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1399: Took "69.276022ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.47s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListShort (0.23s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-984202 image ls --format short --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-984202 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10.1
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.34.3
registry.k8s.io/kube-proxy:v1.34.3
registry.k8s.io/kube-controller-manager:v1.34.3
registry.k8s.io/kube-apiserver:v1.34.3
registry.k8s.io/etcd:3.6.5-0
registry.k8s.io/coredns/coredns:v1.12.1
public.ecr.aws/nginx/nginx:alpine
public.ecr.aws/docker/library/mysql:8.4
localhost/minikube-local-cache-test:functional-984202
ghcr.io/medyagh/image-mirrors/kicbase/echo-server:latest
ghcr.io/medyagh/image-mirrors/kicbase/echo-server:functional-984202
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/kindest/kindnetd:v20251212-v0.29.0-alpha-105-g20ccfc88
docker.io/kindest/kindnetd:v20250512-df8de77b
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-984202 image ls --format short --alsologtostderr:
I1225 18:34:17.283143   47911 out.go:360] Setting OutFile to fd 1 ...
I1225 18:34:17.283277   47911 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1225 18:34:17.283287   47911 out.go:374] Setting ErrFile to fd 2...
I1225 18:34:17.283293   47911 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1225 18:34:17.283581   47911 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22301-5579/.minikube/bin
I1225 18:34:17.284243   47911 config.go:182] Loaded profile config "functional-984202": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
I1225 18:34:17.284341   47911 config.go:182] Loaded profile config "functional-984202": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
I1225 18:34:17.284708   47911 cli_runner.go:164] Run: docker container inspect functional-984202 --format={{.State.Status}}
I1225 18:34:17.304047   47911 ssh_runner.go:195] Run: systemctl --version
I1225 18:34:17.304124   47911 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-984202
I1225 18:34:17.324126   47911 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/22301-5579/.minikube/machines/functional-984202/id_rsa Username:docker}
I1225 18:34:17.418187   47911 ssh_runner.go:195] Run: sudo crictl --timeout=10s images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.23s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListTable (0.24s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-984202 image ls --format table --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-984202 image ls --format table --alsologtostderr:
┌───────────────────────────────────────────────────┬───────────────────────────────────────┬───────────────┬────────┐
│                       IMAGE                       │                  TAG                  │   IMAGE ID    │  SIZE  │
├───────────────────────────────────────────────────┼───────────────────────────────────────┼───────────────┼────────┤
│ registry.k8s.io/etcd                              │ 3.6.5-0                               │ a3e246e9556e9 │ 63.6MB │
│ registry.k8s.io/kube-proxy                        │ v1.34.3                               │ 36eef8e07bdd6 │ 73.1MB │
│ registry.k8s.io/pause                             │ latest                                │ 350b164e7ae1d │ 247kB  │
│ docker.io/kindest/kindnetd                        │ v20250512-df8de77b                    │ 409467f978b4a │ 109MB  │
│ ghcr.io/medyagh/image-mirrors/kicbase/echo-server │ functional-984202                     │ 9056ab77afb8e │ 4.94MB │
│ ghcr.io/medyagh/image-mirrors/kicbase/echo-server │ latest                                │ 9056ab77afb8e │ 4.94MB │
│ registry.k8s.io/kube-apiserver                    │ v1.34.3                               │ aa27095f56193 │ 89.1MB │
│ registry.k8s.io/kube-scheduler                    │ v1.34.3                               │ aec12dadf56dd │ 53.9MB │
│ gcr.io/k8s-minikube/busybox                       │ 1.28.4-glibc                          │ 56cc512116c8f │ 4.63MB │
│ gcr.io/k8s-minikube/busybox                       │ latest                                │ beae173ccac6a │ 1.46MB │
│ registry.k8s.io/kube-controller-manager           │ v1.34.3                               │ 5826b25d990d7 │ 76MB   │
│ registry.k8s.io/pause                             │ 3.10.1                                │ cd073f4c5f6a8 │ 742kB  │
│ registry.k8s.io/pause                             │ 3.3                                   │ 0184c1613d929 │ 686kB  │
│ gcr.io/k8s-minikube/storage-provisioner           │ v5                                    │ 6e38f40d628db │ 31.5MB │
│ localhost/my-image                                │ functional-984202                     │ 4ecafcae77a52 │ 1.47MB │
│ registry.k8s.io/coredns/coredns                   │ v1.12.1                               │ 52546a367cc9e │ 76.1MB │
│ registry.k8s.io/pause                             │ 3.1                                   │ da86e6ba6ca19 │ 747kB  │
│ docker.io/kindest/kindnetd                        │ v20251212-v0.29.0-alpha-105-g20ccfc88 │ 4921d7a6dffa9 │ 108MB  │
│ localhost/minikube-local-cache-test               │ functional-984202                     │ b72febd6809af │ 3.33kB │
│ public.ecr.aws/docker/library/mysql               │ 8.4                                   │ 5e3dcc4ab5604 │ 804MB  │
│ public.ecr.aws/nginx/nginx                        │ alpine                                │ 04da2b0513cd7 │ 55.2MB │
└───────────────────────────────────────────────────┴───────────────────────────────────────┴───────────────┴────────┘
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-984202 image ls --format table --alsologtostderr:
I1225 18:34:21.692263   49035 out.go:360] Setting OutFile to fd 1 ...
I1225 18:34:21.692391   49035 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1225 18:34:21.692403   49035 out.go:374] Setting ErrFile to fd 2...
I1225 18:34:21.692409   49035 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1225 18:34:21.692708   49035 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22301-5579/.minikube/bin
I1225 18:34:21.694531   49035 config.go:182] Loaded profile config "functional-984202": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
I1225 18:34:21.694787   49035 config.go:182] Loaded profile config "functional-984202": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
I1225 18:34:21.696078   49035 cli_runner.go:164] Run: docker container inspect functional-984202 --format={{.State.Status}}
I1225 18:34:21.715534   49035 ssh_runner.go:195] Run: systemctl --version
I1225 18:34:21.715588   49035 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-984202
I1225 18:34:21.736178   49035 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/22301-5579/.minikube/machines/functional-984202/id_rsa Username:docker}
I1225 18:34:21.824318   49035 ssh_runner.go:195] Run: sudo crictl --timeout=10s images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.24s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListJson (0.24s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-984202 image ls --format json --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-984202 image ls --format json --alsologtostderr:
[{"id":"36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691","repoDigests":["registry.k8s.io/kube-proxy@sha256:7298ab89a103523d02ff4f49bedf9359710af61df92efdc07bac873064f03ed6","registry.k8s.io/kube-proxy@sha256:aee44d152c9eaa4f3e10584e61ee501a094880168db257af1201c806982a0945"],"repoTags":["registry.k8s.io/kube-proxy:v1.34.3"],"size":"73145241"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":["registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e"],"repoTags":["registry.k8s.io/pause:3.1"],"size":"746911"},{"id":"115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7","repoDigests":["docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a","docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"],"repoTags":[],"size":"43824855"},{"id":"4eb59c6240ded61adbc2a08c242005459f446bd92c7d4ee9e8c0bf295
ca5716b","repoDigests":["docker.io/library/d5dc14af9429676bc534f7e8497d2e7fbbd3ae092e4bfbaf135b667461740d37-tmp@sha256:9348ad9bf28814f839512578c2f27c7d64f7865932b747997425320d643c1198"],"repoTags":[],"size":"1466132"},{"id":"9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30","repoDigests":["ghcr.io/medyagh/image-mirrors/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6","ghcr.io/medyagh/image-mirrors/kicbase/echo-server@sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86","ghcr.io/medyagh/image-mirrors/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf"],"repoTags":["ghcr.io/medyagh/image-mirrors/kicbase/echo-server:functional-984202","ghcr.io/medyagh/image-mirrors/kicbase/echo-server:latest"],"size":"4943877"},{"id":"5e3dcc4ab5604ab9bdf1054833d4f0ac396465de830ccac42d4f59131db9ba23","repoDigests":["public.ecr.aws/docker/library/mysql@sha256:1f5b0aca09cfa06d9a7b89b28d349c1e01ba0d31339a4786fbcb3
d5927070130","public.ecr.aws/docker/library/mysql@sha256:eaf64e87ae0d1136d46405ad56c9010de509fd5b949b9c8ede45c94f47804d21"],"repoTags":["public.ecr.aws/docker/library/mysql:8.4"],"size":"803760263"},{"id":"4921d7a6dffa922dd679732ba4797085c4f39e9a53bee8b6fdb1d463e8571251","repoDigests":["docker.io/kindest/kindnetd@sha256:377e2e7a513148f7c942b51cd57bdce1589940df856105384ac7f753a1ab43ae","docker.io/kindest/kindnetd@sha256:7c22558dc06a570d46ea6e8a73b23cdc754eb81f7c08d3441a3171ad359ffc27"],"repoTags":["docker.io/kindest/kindnetd:v20251212-v0.29.0-alpha-105-g20ccfc88"],"size":"107598204"},{"id":"07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558","repoDigests":["docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93","docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029"],"repoTags":[],"size":"249229937"},{"id":"56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":["gcr.io/k
8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e","gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"4631262"},{"id":"5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:716a210d31ee5e27053ea0e1a3a3deb4910791a85ba4b1120410b5a4cbcf1954","registry.k8s.io/kube-controller-manager@sha256:90ceecee64b3dac0e619928b9b21522bde1a120bb039971110ab68d830c1f1a2"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.34.3"],"size":"76004183"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":["registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04"],"repoTags":["registry.k8s.io/pause:3.3"],"size":"686139"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":["registry.k8s.io/pause@s
ha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9"],"repoTags":["registry.k8s.io/pause:latest"],"size":"247077"},{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944","gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31470524"},{"id":"b72febd6809af73c5efd097bef69533eb035ae1d76362738de4985274ee6a725","repoDigests":["localhost/minikube-local-cache-test@sha256:1e53d025fc92acd1b997b248c909702069216ee91c7f1b6c0b9c20b69824fe76"],"repoTags":["localhost/minikube-local-cache-test:functional-984202"],"size":"3330"},{"id":"4ecafcae77a52d7dfc7023dc5dd9350f315db3ac36e15f74638456d0fa659251","repoDigests":["localhost/my-image@sha256:7195677c4a75fc1e3b8c0cf31edb52ecb2a42bb3ccb9a81a259c0ede8cec2c6b"],"repoTags":["localhost/my-image
:functional-984202"],"size":"1468744"},{"id":"04da2b0513cd78d8d29d60575cef80813c5496c15a801921e47efdf0feba39e5","repoDigests":["public.ecr.aws/nginx/nginx@sha256:00e053577693e0ee5f7f8b433cdb249624af188622d0da5df20eef4e25a0881c","public.ecr.aws/nginx/nginx@sha256:a411c634df4374901a4a9370626801998f159652f627b1cdfbbbe012adcd6c76"],"repoTags":["public.ecr.aws/nginx/nginx:alpine"],"size":"55157106"},{"id":"aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78","repoDigests":["registry.k8s.io/kube-scheduler@sha256:490ff7b484d67db4a77e8d4bba9f12da68f6a3cae8da3b977522b60c8b5092c9","registry.k8s.io/kube-scheduler@sha256:f9a9bc7948fd804ef02255fe82ac2e85d2a66534bae2fe1348c14849260a1fe2"],"repoTags":["registry.k8s.io/kube-scheduler:v1.34.3"],"size":"53853013"},{"id":"cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f","repoDigests":["registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c","registry.k8s.io/pause@sha256:e5b941ef8f71de54dc3a13398226c269ba217d06
650a21bd3afcf9d890cf1f41"],"repoTags":["registry.k8s.io/pause:3.10.1"],"size":"742092"},{"id":"409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c","repoDigests":["docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a","docker.io/kindest/kindnetd@sha256:7a9c9fa59dd517cdc2c82eef1e51392524dd285e9cf7cb5a851c49f294d6cd11"],"repoTags":["docker.io/kindest/kindnetd:v20250512-df8de77b"],"size":"109379124"},{"id":"beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:62ffc2ed7554e4c6d360bce40bbcf196573dd27c4ce080641a2c59867e732dee","gcr.io/k8s-minikube/busybox@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b"],"repoTags":["gcr.io/k8s-minikube/busybox:latest"],"size":"1462480"},{"id":"52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969","repoDigests":["registry.k8s.io/coredns/coredns@sha256:4f7a57135719628cf2070c5e3cbde64b013e90d4c560c5ecbf14004181f91998","registry
.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c"],"repoTags":["registry.k8s.io/coredns/coredns:v1.12.1"],"size":"76103547"},{"id":"a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1","repoDigests":["registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534","registry.k8s.io/etcd@sha256:28cf8781a30d69c2e3a969764548497a949a363840e1de34e014608162644778"],"repoTags":["registry.k8s.io/etcd:3.6.5-0"],"size":"63585106"},{"id":"aa27095f5619377172f3d59289ccb2ba567ebea93a736d1705be068b2c030b0c","repoDigests":["registry.k8s.io/kube-apiserver@sha256:5af1030676ceca025742ef5e73a504d11b59be0e5551cdb8c9cf0d3c1231b460","registry.k8s.io/kube-apiserver@sha256:9b2e9bae4dc94991e51c601ba6a00369b45064243ba7822143b286edb9d41f9e"],"repoTags":["registry.k8s.io/kube-apiserver:v1.34.3"],"size":"89050097"}]
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-984202 image ls --format json --alsologtostderr:
I1225 18:34:21.504454   48938 out.go:360] Setting OutFile to fd 1 ...
I1225 18:34:21.504689   48938 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1225 18:34:21.504697   48938 out.go:374] Setting ErrFile to fd 2...
I1225 18:34:21.504701   48938 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1225 18:34:21.504881   48938 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22301-5579/.minikube/bin
I1225 18:34:21.505429   48938 config.go:182] Loaded profile config "functional-984202": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
I1225 18:34:21.505521   48938 config.go:182] Loaded profile config "functional-984202": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
I1225 18:34:21.505933   48938 cli_runner.go:164] Run: docker container inspect functional-984202 --format={{.State.Status}}
I1225 18:34:21.525299   48938 ssh_runner.go:195] Run: systemctl --version
I1225 18:34:21.525342   48938 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-984202
I1225 18:34:21.543407   48938 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/22301-5579/.minikube/machines/functional-984202/id_rsa Username:docker}
I1225 18:34:21.635751   48938 ssh_runner.go:195] Run: sudo crictl --timeout=10s images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.24s)
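Editor's note: the JSON listing above is the raw stdout of "image ls --format json"; a minimal shell sketch (not part of the test run, and assuming the jq tool is available) for pulling just the repo tags out of that array of {id, repoDigests, repoTags, size} objects:

  out/minikube-linux-amd64 -p functional-984202 image ls --format json \
    | jq -r '.[].repoTags[]?'    # "[]?" skips any entry that has no repoTags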

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListYaml (0.23s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-984202 image ls --format yaml --alsologtostderr
I1225 18:34:17.488377    9112 detect.go:223] nested VM detected
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-984202 image ls --format yaml --alsologtostderr:
- id: 9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30
repoDigests:
- ghcr.io/medyagh/image-mirrors/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6
- ghcr.io/medyagh/image-mirrors/kicbase/echo-server@sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86
- ghcr.io/medyagh/image-mirrors/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf
repoTags:
- ghcr.io/medyagh/image-mirrors/kicbase/echo-server:functional-984202
- ghcr.io/medyagh/image-mirrors/kicbase/echo-server:latest
size: "4943877"
- id: 5e3dcc4ab5604ab9bdf1054833d4f0ac396465de830ccac42d4f59131db9ba23
repoDigests:
- public.ecr.aws/docker/library/mysql@sha256:1f5b0aca09cfa06d9a7b89b28d349c1e01ba0d31339a4786fbcb3d5927070130
- public.ecr.aws/docker/library/mysql@sha256:eaf64e87ae0d1136d46405ad56c9010de509fd5b949b9c8ede45c94f47804d21
repoTags:
- public.ecr.aws/docker/library/mysql:8.4
size: "803760263"
- id: 52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:4f7a57135719628cf2070c5e3cbde64b013e90d4c560c5ecbf14004181f91998
- registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c
repoTags:
- registry.k8s.io/coredns/coredns:v1.12.1
size: "76103547"
- id: a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1
repoDigests:
- registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534
- registry.k8s.io/etcd@sha256:28cf8781a30d69c2e3a969764548497a949a363840e1de34e014608162644778
repoTags:
- registry.k8s.io/etcd:3.6.5-0
size: "63585106"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests:
- registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04
repoTags:
- registry.k8s.io/pause:3.3
size: "686139"
- id: 4921d7a6dffa922dd679732ba4797085c4f39e9a53bee8b6fdb1d463e8571251
repoDigests:
- docker.io/kindest/kindnetd@sha256:377e2e7a513148f7c942b51cd57bdce1589940df856105384ac7f753a1ab43ae
- docker.io/kindest/kindnetd@sha256:7c22558dc06a570d46ea6e8a73b23cdc754eb81f7c08d3441a3171ad359ffc27
repoTags:
- docker.io/kindest/kindnetd:v20251212-v0.29.0-alpha-105-g20ccfc88
size: "107598204"
- id: 04da2b0513cd78d8d29d60575cef80813c5496c15a801921e47efdf0feba39e5
repoDigests:
- public.ecr.aws/nginx/nginx@sha256:00e053577693e0ee5f7f8b433cdb249624af188622d0da5df20eef4e25a0881c
- public.ecr.aws/nginx/nginx@sha256:a411c634df4374901a4a9370626801998f159652f627b1cdfbbbe012adcd6c76
repoTags:
- public.ecr.aws/nginx/nginx:alpine
size: "55157106"
- id: 409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c
repoDigests:
- docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a
- docker.io/kindest/kindnetd@sha256:7a9c9fa59dd517cdc2c82eef1e51392524dd285e9cf7cb5a851c49f294d6cd11
repoTags:
- docker.io/kindest/kindnetd:v20250512-df8de77b
size: "109379124"
- id: b72febd6809af73c5efd097bef69533eb035ae1d76362738de4985274ee6a725
repoDigests:
- localhost/minikube-local-cache-test@sha256:1e53d025fc92acd1b997b248c909702069216ee91c7f1b6c0b9c20b69824fe76
repoTags:
- localhost/minikube-local-cache-test:functional-984202
size: "3330"
- id: aa27095f5619377172f3d59289ccb2ba567ebea93a736d1705be068b2c030b0c
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:5af1030676ceca025742ef5e73a504d11b59be0e5551cdb8c9cf0d3c1231b460
- registry.k8s.io/kube-apiserver@sha256:9b2e9bae4dc94991e51c601ba6a00369b45064243ba7822143b286edb9d41f9e
repoTags:
- registry.k8s.io/kube-apiserver:v1.34.3
size: "89050097"
- id: cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f
repoDigests:
- registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c
- registry.k8s.io/pause@sha256:e5b941ef8f71de54dc3a13398226c269ba217d06650a21bd3afcf9d890cf1f41
repoTags:
- registry.k8s.io/pause:3.10.1
size: "742092"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests:
- registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9
repoTags:
- registry.k8s.io/pause:latest
size: "247077"
- id: 56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
- gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "4631262"
- id: aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:490ff7b484d67db4a77e8d4bba9f12da68f6a3cae8da3b977522b60c8b5092c9
- registry.k8s.io/kube-scheduler@sha256:f9a9bc7948fd804ef02255fe82ac2e85d2a66534bae2fe1348c14849260a1fe2
repoTags:
- registry.k8s.io/kube-scheduler:v1.34.3
size: "53853013"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests:
- registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e
repoTags:
- registry.k8s.io/pause:3.1
size: "746911"
- id: 5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:716a210d31ee5e27053ea0e1a3a3deb4910791a85ba4b1120410b5a4cbcf1954
- registry.k8s.io/kube-controller-manager@sha256:90ceecee64b3dac0e619928b9b21522bde1a120bb039971110ab68d830c1f1a2
repoTags:
- registry.k8s.io/kube-controller-manager:v1.34.3
size: "76004183"
- id: 36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691
repoDigests:
- registry.k8s.io/kube-proxy@sha256:7298ab89a103523d02ff4f49bedf9359710af61df92efdc07bac873064f03ed6
- registry.k8s.io/kube-proxy@sha256:aee44d152c9eaa4f3e10584e61ee501a094880168db257af1201c806982a0945
repoTags:
- registry.k8s.io/kube-proxy:v1.34.3
size: "73145241"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
- gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31470524"

                                                
                                                
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-984202 image ls --format yaml --alsologtostderr:
I1225 18:34:17.525007   47986 out.go:360] Setting OutFile to fd 1 ...
I1225 18:34:17.525132   47986 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1225 18:34:17.525142   47986 out.go:374] Setting ErrFile to fd 2...
I1225 18:34:17.525146   47986 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1225 18:34:17.525307   47986 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22301-5579/.minikube/bin
I1225 18:34:17.525818   47986 config.go:182] Loaded profile config "functional-984202": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
I1225 18:34:17.525938   47986 config.go:182] Loaded profile config "functional-984202": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
I1225 18:34:17.526364   47986 cli_runner.go:164] Run: docker container inspect functional-984202 --format={{.State.Status}}
I1225 18:34:17.543867   47986 ssh_runner.go:195] Run: systemctl --version
I1225 18:34:17.543940   47986 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-984202
I1225 18:34:17.563046   47986 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/22301-5579/.minikube/machines/functional-984202/id_rsa Username:docker}
I1225 18:34:17.654812   47986 ssh_runner.go:195] Run: sudo crictl --timeout=10s images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.23s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageBuild (3.93s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:323: (dbg) Run:  out/minikube-linux-amd64 -p functional-984202 ssh pgrep buildkitd
functional_test.go:323: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-984202 ssh pgrep buildkitd: exit status 1 (310.505139ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:330: (dbg) Run:  out/minikube-linux-amd64 -p functional-984202 image build -t localhost/my-image:functional-984202 testdata/build --alsologtostderr
functional_test.go:330: (dbg) Done: out/minikube-linux-amd64 -p functional-984202 image build -t localhost/my-image:functional-984202 testdata/build --alsologtostderr: (3.388657434s)
functional_test.go:335: (dbg) Stdout: out/minikube-linux-amd64 -p functional-984202 image build -t localhost/my-image:functional-984202 testdata/build --alsologtostderr:
STEP 1/3: FROM gcr.io/k8s-minikube/busybox
STEP 2/3: RUN true
--> 4eb59c6240d
STEP 3/3: ADD content.txt /
COMMIT localhost/my-image:functional-984202
--> 4ecafcae77a
Successfully tagged localhost/my-image:functional-984202
4ecafcae77a52d7dfc7023dc5dd9350f315db3ac36e15f74638456d0fa659251
functional_test.go:338: (dbg) Stderr: out/minikube-linux-amd64 -p functional-984202 image build -t localhost/my-image:functional-984202 testdata/build --alsologtostderr:
I1225 18:34:18.067594   48239 out.go:360] Setting OutFile to fd 1 ...
I1225 18:34:18.067748   48239 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1225 18:34:18.067756   48239 out.go:374] Setting ErrFile to fd 2...
I1225 18:34:18.067762   48239 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1225 18:34:18.068092   48239 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22301-5579/.minikube/bin
I1225 18:34:18.068860   48239 config.go:182] Loaded profile config "functional-984202": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
I1225 18:34:18.069670   48239 config.go:182] Loaded profile config "functional-984202": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
I1225 18:34:18.070394   48239 cli_runner.go:164] Run: docker container inspect functional-984202 --format={{.State.Status}}
I1225 18:34:18.093161   48239 ssh_runner.go:195] Run: systemctl --version
I1225 18:34:18.093223   48239 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-984202
I1225 18:34:18.118795   48239 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/22301-5579/.minikube/machines/functional-984202/id_rsa Username:docker}
I1225 18:34:18.218597   48239 build_images.go:162] Building image from path: /tmp/build.1546231572.tar
I1225 18:34:18.218650   48239 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I1225 18:34:18.230497   48239 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.1546231572.tar
I1225 18:34:18.235606   48239 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.1546231572.tar: stat -c "%s %y" /var/lib/minikube/build/build.1546231572.tar: Process exited with status 1
stdout:

                                                
                                                
stderr:
stat: cannot statx '/var/lib/minikube/build/build.1546231572.tar': No such file or directory
I1225 18:34:18.235637   48239 ssh_runner.go:362] scp /tmp/build.1546231572.tar --> /var/lib/minikube/build/build.1546231572.tar (3072 bytes)
I1225 18:34:18.256624   48239 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.1546231572
I1225 18:34:18.266853   48239 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.1546231572 -xf /var/lib/minikube/build/build.1546231572.tar
I1225 18:34:18.277184   48239 crio.go:315] Building image: /var/lib/minikube/build/build.1546231572
I1225 18:34:18.277253   48239 ssh_runner.go:195] Run: sudo podman build -t localhost/my-image:functional-984202 /var/lib/minikube/build/build.1546231572 --cgroup-manager=cgroupfs
Trying to pull gcr.io/k8s-minikube/busybox:latest...
Getting image source signatures
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying config sha256:beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a
Writing manifest to image destination
Storing signatures
I1225 18:34:21.361094   48239 ssh_runner.go:235] Completed: sudo podman build -t localhost/my-image:functional-984202 /var/lib/minikube/build/build.1546231572 --cgroup-manager=cgroupfs: (3.083809279s)
I1225 18:34:21.361169   48239 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.1546231572
I1225 18:34:21.371269   48239 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.1546231572.tar
I1225 18:34:21.379335   48239 build_images.go:218] Built localhost/my-image:functional-984202 from /tmp/build.1546231572.tar
I1225 18:34:21.379374   48239 build_images.go:134] succeeded building to: functional-984202
I1225 18:34:21.379380   48239 build_images.go:135] failed building to: 
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-984202 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (3.93s)
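Editor's note: the contents of testdata/build are not reproduced in this log; a hypothetical build context consistent with the three STEP lines shown above (the file name content.txt appears in the log, its contents are an assumption) would look roughly like this shell sketch:

  mkdir -p testdata/build
  cat > testdata/build/Dockerfile <<'EOF'
FROM gcr.io/k8s-minikube/busybox
RUN true
ADD content.txt /
EOF
  echo "test content" > testdata/build/content.txt   # actual file contents are an assumption
  out/minikube-linux-amd64 -p functional-984202 image build -t localhost/my-image:functional-984202 testdata/build --alsologtostderr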

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/Setup (0.41s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:357: (dbg) Run:  docker pull ghcr.io/medyagh/image-mirrors/kicbase/echo-server:1.0
functional_test.go:362: (dbg) Run:  docker tag ghcr.io/medyagh/image-mirrors/kicbase/echo-server:1.0 ghcr.io/medyagh/image-mirrors/kicbase/echo-server:functional-984202
--- PASS: TestFunctional/parallel/ImageCommands/Setup (0.41s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.65s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:370: (dbg) Run:  out/minikube-linux-amd64 -p functional-984202 image load --daemon ghcr.io/medyagh/image-mirrors/kicbase/echo-server:functional-984202 --alsologtostderr
functional_test.go:370: (dbg) Done: out/minikube-linux-amd64 -p functional-984202 image load --daemon ghcr.io/medyagh/image-mirrors/kicbase/echo-server:functional-984202 --alsologtostderr: (1.323664382s)
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-984202 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.65s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.62s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-amd64 -p functional-984202 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-amd64 -p functional-984202 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-amd64 -p functional-984202 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to find parent, assuming dead: process does not exist
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-amd64 -p functional-984202 tunnel --alsologtostderr] ...
helpers_test.go:526: unable to kill pid 42222: os: process already finished
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.62s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageReloadDaemon (6.09s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:380: (dbg) Run:  out/minikube-linux-amd64 -p functional-984202 image load --daemon ghcr.io/medyagh/image-mirrors/kicbase/echo-server:functional-984202 --alsologtostderr
functional_test.go:380: (dbg) Done: out/minikube-linux-amd64 -p functional-984202 image load --daemon ghcr.io/medyagh/image-mirrors/kicbase/echo-server:functional-984202 --alsologtostderr: (2.58541054s)
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-984202 image ls
functional_test.go:466: (dbg) Done: out/minikube-linux-amd64 -p functional-984202 image ls: (3.508227315s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (6.09s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-linux-amd64 -p functional-984202 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (13.33s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-984202 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:353: "nginx-svc" [fdcce93b-2b0e-4ea2-9e75-299aed7cfc64] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:353: "nginx-svc" [fdcce93b-2b0e-4ea2-9e75-299aed7cfc64] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 13.005359097s
I1225 18:34:08.088940    9112 kapi.go:150] Service nginx-svc in namespace default found.
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (13.33s)

                                                
                                    
x
+
TestFunctional/parallel/MountCmd/any-port (11.33s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:74: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-984202 /tmp/TestFunctionalparallelMountCmdany-port2189058298/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:108: wrote "test-1766687634961553070" to /tmp/TestFunctionalparallelMountCmdany-port2189058298/001/created-by-test
functional_test_mount_test.go:108: wrote "test-1766687634961553070" to /tmp/TestFunctionalparallelMountCmdany-port2189058298/001/created-by-test-removed-by-pod
functional_test_mount_test.go:108: wrote "test-1766687634961553070" to /tmp/TestFunctionalparallelMountCmdany-port2189058298/001/test-1766687634961553070
functional_test_mount_test.go:116: (dbg) Run:  out/minikube-linux-amd64 -p functional-984202 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:116: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-984202 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (389.291654ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1225 18:33:55.351504    9112 retry.go:84] will retry after 600ms: exit status 1
functional_test_mount_test.go:116: (dbg) Run:  out/minikube-linux-amd64 -p functional-984202 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:130: (dbg) Run:  out/minikube-linux-amd64 -p functional-984202 ssh -- ls -la /mount-9p
functional_test_mount_test.go:134: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Dec 25 18:33 created-by-test
-rw-r--r-- 1 docker docker 24 Dec 25 18:33 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Dec 25 18:33 test-1766687634961553070
functional_test_mount_test.go:138: (dbg) Run:  out/minikube-linux-amd64 -p functional-984202 ssh cat /mount-9p/test-1766687634961553070
functional_test_mount_test.go:149: (dbg) Run:  kubectl --context functional-984202 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:154: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:353: "busybox-mount" [26ceef66-a6e8-4b60-90ff-c63e7ef2640a] Pending
helpers_test.go:353: "busybox-mount" [26ceef66-a6e8-4b60-90ff-c63e7ef2640a] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:353: "busybox-mount" [26ceef66-a6e8-4b60-90ff-c63e7ef2640a] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:353: "busybox-mount" [26ceef66-a6e8-4b60-90ff-c63e7ef2640a] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:154: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 8.003201211s
functional_test_mount_test.go:170: (dbg) Run:  kubectl --context functional-984202 logs busybox-mount
functional_test_mount_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p functional-984202 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p functional-984202 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:91: (dbg) Run:  out/minikube-linux-amd64 -p functional-984202 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:95: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-984202 /tmp/TestFunctionalparallelMountCmdany-port2189058298/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (11.33s)
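Editor's note: a condensed sketch of the 9p mount flow exercised above, using the commands from the log; /tmp/mount-src stands in for the test's temp directory, and the plain kill at the end is an assumption replacing the test's own cleanup helper:

  out/minikube-linux-amd64 mount -p functional-984202 /tmp/mount-src:/mount-9p --alsologtostderr -v=1 &   # run the mount daemon in the background
  MOUNT_PID=$!
  out/minikube-linux-amd64 -p functional-984202 ssh "findmnt -T /mount-9p | grep 9p"   # confirm the guest sees a 9p filesystem
  out/minikube-linux-amd64 -p functional-984202 ssh -- ls -la /mount-9p                # inspect the mounted contents from inside the node
  kill "$MOUNT_PID"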

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.45s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:250: (dbg) Run:  docker pull ghcr.io/medyagh/image-mirrors/kicbase/echo-server:latest
functional_test.go:255: (dbg) Run:  docker tag ghcr.io/medyagh/image-mirrors/kicbase/echo-server:latest ghcr.io/medyagh/image-mirrors/kicbase/echo-server:functional-984202
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-984202 image load --daemon ghcr.io/medyagh/image-mirrors/kicbase/echo-server:functional-984202 --alsologtostderr
functional_test.go:260: (dbg) Done: out/minikube-linux-amd64 -p functional-984202 image load --daemon ghcr.io/medyagh/image-mirrors/kicbase/echo-server:functional-984202 --alsologtostderr: (1.069266395s)
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-984202 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.45s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.36s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:395: (dbg) Run:  out/minikube-linux-amd64 -p functional-984202 image save ghcr.io/medyagh/image-mirrors/kicbase/echo-server:functional-984202 /home/jenkins/workspace/Docker_Linux_crio_integration/echo-server-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.36s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageRemove (0.49s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:407: (dbg) Run:  out/minikube-linux-amd64 -p functional-984202 image rm ghcr.io/medyagh/image-mirrors/kicbase/echo-server:functional-984202 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-984202 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.49s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.72s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:424: (dbg) Run:  out/minikube-linux-amd64 -p functional-984202 image load /home/jenkins/workspace/Docker_Linux_crio_integration/echo-server-save.tar --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-984202 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.72s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.38s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:434: (dbg) Run:  docker rmi ghcr.io/medyagh/image-mirrors/kicbase/echo-server:functional-984202
functional_test.go:439: (dbg) Run:  out/minikube-linux-amd64 -p functional-984202 image save --daemon ghcr.io/medyagh/image-mirrors/kicbase/echo-server:functional-984202 --alsologtostderr
functional_test.go:447: (dbg) Run:  docker image inspect ghcr.io/medyagh/image-mirrors/kicbase/echo-server:functional-984202
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.38s)
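Editor's note: taken together, the ImageSaveToFile, ImageRemove, ImageLoadFromFile and ImageSaveDaemon tests above round-trip an image through a tarball and back into the local docker daemon; a minimal sketch of that sequence (commands from the log, with /tmp/echo-server-save.tar standing in for the workspace path used in the run):

  IMG=ghcr.io/medyagh/image-mirrors/kicbase/echo-server:functional-984202
  out/minikube-linux-amd64 -p functional-984202 image save "$IMG" /tmp/echo-server-save.tar   # export from the cluster runtime to a tar
  out/minikube-linux-amd64 -p functional-984202 image rm "$IMG"                               # remove it from the cluster runtime
  out/minikube-linux-amd64 -p functional-984202 image load /tmp/echo-server-save.tar          # load it back from the tar
  out/minikube-linux-amd64 -p functional-984202 image save --daemon "$IMG"                    # push it into the local docker daemon
  docker image inspect "$IMG"                                                                 # confirm the daemon now has it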

                                                
                                    
x
+
TestFunctional/parallel/MountCmd/specific-port (1.75s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:219: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-984202 /tmp/TestFunctionalparallelMountCmdspecific-port2329794911/001:/mount-9p --alsologtostderr -v=1 --port 34885]
functional_test_mount_test.go:249: (dbg) Run:  out/minikube-linux-amd64 -p functional-984202 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:249: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-984202 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (288.845784ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:249: (dbg) Run:  out/minikube-linux-amd64 -p functional-984202 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:263: (dbg) Run:  out/minikube-linux-amd64 -p functional-984202 ssh -- ls -la /mount-9p
functional_test_mount_test.go:267: guest mount directory contents
total 0
functional_test_mount_test.go:269: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-984202 /tmp/TestFunctionalparallelMountCmdspecific-port2329794911/001:/mount-9p --alsologtostderr -v=1 --port 34885] ...
functional_test_mount_test.go:270: reading mount text
functional_test_mount_test.go:284: done reading mount text
functional_test_mount_test.go:236: (dbg) Run:  out/minikube-linux-amd64 -p functional-984202 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:236: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-984202 ssh "sudo umount -f /mount-9p": exit status 1 (267.248864ms)

                                                
                                                
-- stdout --
	umount: /mount-9p: not mounted.

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

                                                
                                                
** /stderr **
functional_test_mount_test.go:238: "out/minikube-linux-amd64 -p functional-984202 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:240: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-984202 /tmp/TestFunctionalparallelMountCmdspecific-port2329794911/001:/mount-9p --alsologtostderr -v=1 --port 34885] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (1.75s)

                                                
                                    
x
+
TestFunctional/parallel/MountCmd/VerifyCleanup (1.86s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:304: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-984202 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1406102444/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:304: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-984202 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1406102444/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:304: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-984202 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1406102444/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:331: (dbg) Run:  out/minikube-linux-amd64 -p functional-984202 ssh "findmnt -T" /mount1
functional_test_mount_test.go:331: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-984202 ssh "findmnt -T" /mount1: exit status 1 (366.962359ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:331: (dbg) Run:  out/minikube-linux-amd64 -p functional-984202 ssh "findmnt -T" /mount1
functional_test_mount_test.go:331: (dbg) Run:  out/minikube-linux-amd64 -p functional-984202 ssh "findmnt -T" /mount2
I1225 18:34:09.402408    9112 detect.go:223] nested VM detected
functional_test_mount_test.go:331: (dbg) Run:  out/minikube-linux-amd64 -p functional-984202 ssh "findmnt -T" /mount3
functional_test_mount_test.go:376: (dbg) Run:  out/minikube-linux-amd64 mount -p functional-984202 --kill=true
functional_test_mount_test.go:319: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-984202 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1406102444/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:319: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-984202 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1406102444/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:319: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-984202 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1406102444/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.86s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.12s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-984202 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.12s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.103.88.196 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.13s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-linux-amd64 -p functional-984202 tunnel --alsologtostderr] ...
functional_test_tunnel_test.go:437: failed to stop process: signal: terminated
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.13s)
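Editor's note: a compact sketch of the tunnel lifecycle the TunnelCmd tests above walk through; the service name nginx-svc, testdata/testsvc.yaml and the jsonpath query come from the log, while the curl check is an assumption standing in for the test's own HTTP probe:

  out/minikube-linux-amd64 -p functional-984202 tunnel --alsologtostderr &   # keep the tunnel running in the background
  TUNNEL_PID=$!
  kubectl --context functional-984202 apply -f testdata/testsvc.yaml         # LoadBalancer service backed by an nginx pod
  IP=$(kubectl --context functional-984202 get svc nginx-svc -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
  curl -s "http://$IP" >/dev/null && echo "tunnel works"                     # reachability check
  kill "$TUNNEL_PID"                                                         # tear the tunnel down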

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/DeployApp (8.15s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1456: (dbg) Run:  kubectl --context functional-984202 create deployment hello-node --image ghcr.io/medyagh/image-mirrors/kicbase/echo-server
functional_test.go:1460: (dbg) Run:  kubectl --context functional-984202 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1465: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:353: "hello-node-f68f7994-47dgx" [2551c73f-5d21-41eb-acd3-936b1305422b] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
helpers_test.go:353: "hello-node-f68f7994-47dgx" [2551c73f-5d21-41eb-acd3-936b1305422b] Running
functional_test.go:1465: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 8.004252006s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (8.15s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/List (1.77s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1474: (dbg) Run:  out/minikube-linux-amd64 -p functional-984202 service list
functional_test.go:1474: (dbg) Done: out/minikube-linux-amd64 -p functional-984202 service list: (1.770649542s)
--- PASS: TestFunctional/parallel/ServiceCmd/List (1.77s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/JSONOutput (1.76s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1504: (dbg) Run:  out/minikube-linux-amd64 -p functional-984202 service list -o json
2025/12/25 18:34:21 [DEBUG] GET http://127.0.0.1:35713/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
functional_test.go:1504: (dbg) Done: out/minikube-linux-amd64 -p functional-984202 service list -o json: (1.7550969s)
functional_test.go:1509: Took "1.755185268s" to run "out/minikube-linux-amd64 -p functional-984202 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (1.76s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/HTTPS (0.55s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1524: (dbg) Run:  out/minikube-linux-amd64 -p functional-984202 service --namespace=default --https --url hello-node
functional_test.go:1537: found endpoint: https://192.168.49.2:31421
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.55s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/Format (0.55s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1555: (dbg) Run:  out/minikube-linux-amd64 -p functional-984202 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.55s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/URL (0.54s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1574: (dbg) Run:  out/minikube-linux-amd64 -p functional-984202 service hello-node --url
functional_test.go:1580: found endpoint for hello-node: http://192.168.49.2:31421
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.54s)
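Editor's note: the ServiceCmd tests above boil down to the following workflow; a minimal sketch using only commands that appear in the log (the reported URL and NodePort differ per run):

  kubectl --context functional-984202 create deployment hello-node --image ghcr.io/medyagh/image-mirrors/kicbase/echo-server
  kubectl --context functional-984202 expose deployment hello-node --type=NodePort --port=8080
  out/minikube-linux-amd64 -p functional-984202 service list                          # human-readable listing
  out/minikube-linux-amd64 -p functional-984202 service list -o json                  # machine-readable listing
  out/minikube-linux-amd64 -p functional-984202 service --namespace=default --https --url hello-node
  out/minikube-linux-amd64 -p functional-984202 service hello-node --url              # e.g. http://192.168.49.2:31421 in this run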

                                                
                                    
x
+
TestFunctional/delete_echo-server_images (0.04s)

                                                
                                                
=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:205: (dbg) Run:  docker rmi -f ghcr.io/medyagh/image-mirrors/kicbase/echo-server:1.0
functional_test.go:205: (dbg) Run:  docker rmi -f ghcr.io/medyagh/image-mirrors/kicbase/echo-server:functional-984202
--- PASS: TestFunctional/delete_echo-server_images (0.04s)

                                                
                                    
x
+
TestFunctional/delete_my-image_image (0.02s)

                                                
                                                
=== RUN   TestFunctional/delete_my-image_image
functional_test.go:213: (dbg) Run:  docker rmi -f localhost/my-image:functional-984202
--- PASS: TestFunctional/delete_my-image_image (0.02s)

                                                
                                    
x
+
TestFunctional/delete_minikube_cached_images (0.02s)

                                                
                                                
=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:221: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-984202
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CopySyncFile (0s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CopySyncFile
functional_test.go:1865: local sync path: /home/jenkins/minikube-integration/22301-5579/.minikube/files/etc/test/nested/copy/9112/hosts
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CopySyncFile (0.00s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/StartWithProxy (39.14s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/StartWithProxy
functional_test.go:2244: (dbg) Run:  out/minikube-linux-amd64 start -p functional-969923 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-rc.1
E1225 18:34:40.370957    9112 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22301-5579/.minikube/profiles/addons-335994/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1225 18:34:40.376727    9112 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22301-5579/.minikube/profiles/addons-335994/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1225 18:34:40.386994    9112 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22301-5579/.minikube/profiles/addons-335994/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1225 18:34:40.407320    9112 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22301-5579/.minikube/profiles/addons-335994/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1225 18:34:40.447634    9112 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22301-5579/.minikube/profiles/addons-335994/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1225 18:34:40.528062    9112 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22301-5579/.minikube/profiles/addons-335994/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1225 18:34:40.688489    9112 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22301-5579/.minikube/profiles/addons-335994/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1225 18:34:41.009073    9112 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22301-5579/.minikube/profiles/addons-335994/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1225 18:34:41.649993    9112 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22301-5579/.minikube/profiles/addons-335994/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1225 18:34:42.931109    9112 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22301-5579/.minikube/profiles/addons-335994/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1225 18:34:45.492839    9112 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22301-5579/.minikube/profiles/addons-335994/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1225 18:34:50.613432    9112 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22301-5579/.minikube/profiles/addons-335994/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1225 18:35:00.854081    9112 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22301-5579/.minikube/profiles/addons-335994/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:2244: (dbg) Done: out/minikube-linux-amd64 start -p functional-969923 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-rc.1: (39.142340291s)
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/StartWithProxy (39.14s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/AuditLog (0s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/AuditLog
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/AuditLog (0.00s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/SoftStart (6.05s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/SoftStart
I1225 18:35:07.146008    9112 config.go:182] Loaded profile config "functional-969923": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-rc.1
functional_test.go:674: (dbg) Run:  out/minikube-linux-amd64 start -p functional-969923 --alsologtostderr -v=8
functional_test.go:674: (dbg) Done: out/minikube-linux-amd64 start -p functional-969923 --alsologtostderr -v=8: (6.047000057s)
functional_test.go:678: soft start took 6.047387329s for "functional-969923" cluster.
I1225 18:35:13.193363    9112 config.go:182] Loaded profile config "functional-969923": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-rc.1
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/SoftStart (6.05s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/KubeContext (0.05s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/KubeContext
functional_test.go:696: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/KubeContext (0.05s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/KubectlGetPods (0.06s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/KubectlGetPods
functional_test.go:711: (dbg) Run:  kubectl --context functional-969923 get po -A
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/KubectlGetPods (0.06s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CacheCmd/cache/add_remote (2.45s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CacheCmd/cache/add_remote
functional_test.go:1069: (dbg) Run:  out/minikube-linux-amd64 -p functional-969923 cache add registry.k8s.io/pause:3.1
functional_test.go:1069: (dbg) Run:  out/minikube-linux-amd64 -p functional-969923 cache add registry.k8s.io/pause:3.3
functional_test.go:1069: (dbg) Run:  out/minikube-linux-amd64 -p functional-969923 cache add registry.k8s.io/pause:latest
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CacheCmd/cache/add_remote (2.45s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CacheCmd/cache/add_local (0.83s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CacheCmd/cache/add_local
functional_test.go:1097: (dbg) Run:  docker build -t minikube-local-cache-test:functional-969923 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-rc.1serialCacheC2536772472/001
functional_test.go:1109: (dbg) Run:  out/minikube-linux-amd64 -p functional-969923 cache add minikube-local-cache-test:functional-969923
functional_test.go:1114: (dbg) Run:  out/minikube-linux-amd64 -p functional-969923 cache delete minikube-local-cache-test:functional-969923
functional_test.go:1103: (dbg) Run:  docker rmi minikube-local-cache-test:functional-969923
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CacheCmd/cache/add_local (0.83s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CacheCmd/cache/CacheDelete (0.06s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CacheCmd/cache/CacheDelete
functional_test.go:1122: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CacheCmd/cache/CacheDelete (0.06s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CacheCmd/cache/list (0.06s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CacheCmd/cache/list
functional_test.go:1130: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CacheCmd/cache/list (0.06s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CacheCmd/cache/verify_cache_inside_node (0.28s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1144: (dbg) Run:  out/minikube-linux-amd64 -p functional-969923 ssh sudo crictl images
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CacheCmd/cache/verify_cache_inside_node (0.28s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CacheCmd/cache/cache_reload (1.5s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CacheCmd/cache/cache_reload
functional_test.go:1167: (dbg) Run:  out/minikube-linux-amd64 -p functional-969923 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1173: (dbg) Run:  out/minikube-linux-amd64 -p functional-969923 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1173: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-969923 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (272.920248ms)

                                                
                                                
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:1178: (dbg) Run:  out/minikube-linux-amd64 -p functional-969923 cache reload
functional_test.go:1183: (dbg) Run:  out/minikube-linux-amd64 -p functional-969923 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CacheCmd/cache/cache_reload (1.50s)
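Editor's note: the CacheCmd tests above exercise the local image cache end to end; a condensed sketch of the same sequence (profile name and image tags from the log; as in the test, cache list and cache delete are run without a profile flag):

  out/minikube-linux-amd64 -p functional-969923 cache add registry.k8s.io/pause:latest       # fetch into the local cache and load into the node
  out/minikube-linux-amd64 cache list                                                        # show cached images
  out/minikube-linux-amd64 -p functional-969923 ssh sudo crictl images                       # confirm the image is present inside the node
  out/minikube-linux-amd64 -p functional-969923 ssh sudo crictl rmi registry.k8s.io/pause:latest
  out/minikube-linux-amd64 -p functional-969923 cache reload                                 # re-push cached images into the node
  out/minikube-linux-amd64 -p functional-969923 ssh sudo crictl inspecti registry.k8s.io/pause:latest
  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:latest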

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CacheCmd/cache/delete (0.13s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CacheCmd/cache/delete
functional_test.go:1192: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1192: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CacheCmd/cache/delete (0.13s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/MinikubeKubectlCmd (0.12s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/MinikubeKubectlCmd
functional_test.go:731: (dbg) Run:  out/minikube-linux-amd64 -p functional-969923 kubectl -- --context functional-969923 get pods
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/MinikubeKubectlCmd (0.12s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/MinikubeKubectlCmdDirectly (0.11s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/MinikubeKubectlCmdDirectly
functional_test.go:756: (dbg) Run:  out/kubectl --context functional-969923 get pods
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/MinikubeKubectlCmdDirectly (0.11s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/ExtraConfig (53.51s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/ExtraConfig
functional_test.go:772: (dbg) Run:  out/minikube-linux-amd64 start -p functional-969923 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E1225 18:35:21.334982    9112 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22301-5579/.minikube/profiles/addons-335994/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1225 18:36:02.296067    9112 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22301-5579/.minikube/profiles/addons-335994/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:772: (dbg) Done: out/minikube-linux-amd64 start -p functional-969923 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (53.510756433s)
functional_test.go:776: restart took 53.510864383s for "functional-969923" cluster.
I1225 18:36:12.359233    9112 config.go:182] Loaded profile config "functional-969923": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-rc.1
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/ExtraConfig (53.51s)
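The ExtraConfig step restarts the existing profile with an additional apiserver flag and waits for all verified components to come back. A small sketch of driving that restart from Go, assuming the binary path and profile name from this run:

package main

import (
	"os"
	"os/exec"
)

func main() {
	const mk = "out/minikube-linux-amd64" // assumed binary path from this run
	// Restart the existing profile with an extra apiserver flag; --wait=all
	// blocks until every verified component reports healthy again.
	cmd := exec.Command(mk, "start", "-p", "functional-969923",
		"--extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision",
		"--wait=all")
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	if err := cmd.Run(); err != nil {
		os.Exit(1)
	}
}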

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/ComponentHealth (0.06s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/ComponentHealth
functional_test.go:825: (dbg) Run:  kubectl --context functional-969923 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:840: etcd phase: Running
functional_test.go:850: etcd status: Ready
functional_test.go:840: kube-apiserver phase: Running
functional_test.go:850: kube-apiserver status: Ready
functional_test.go:840: kube-controller-manager phase: Running
functional_test.go:850: kube-controller-manager status: Ready
functional_test.go:840: kube-scheduler phase: Running
functional_test.go:850: kube-scheduler status: Ready
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/ComponentHealth (0.06s)
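ComponentHealth reads the control-plane pods as JSON and checks that each is Running and Ready. A rough Go equivalent of that check, assuming the context name from this run (the struct only mirrors the fields the check needs):

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// podList mirrors only the fields the health check needs.
type podList struct {
	Items []struct {
		Metadata struct {
			Labels map[string]string `json:"labels"`
		} `json:"metadata"`
		Status struct {
			Phase      string `json:"phase"`
			Conditions []struct {
				Type   string `json:"type"`
				Status string `json:"status"`
			} `json:"conditions"`
		} `json:"status"`
	} `json:"items"`
}

func main() {
	out, err := exec.Command("kubectl", "--context", "functional-969923",
		"get", "po", "-l", "tier=control-plane", "-n", "kube-system", "-o", "json").Output()
	if err != nil {
		panic(err)
	}
	var pods podList
	if err := json.Unmarshal(out, &pods); err != nil {
		panic(err)
	}
	for _, p := range pods.Items {
		ready := "Unknown"
		for _, c := range p.Status.Conditions {
			if c.Type == "Ready" {
				ready = c.Status
			}
		}
		// Expect phase Running and Ready=True for etcd, kube-apiserver,
		// kube-controller-manager and kube-scheduler.
		fmt.Printf("%s phase=%s ready=%s\n", p.Metadata.Labels["component"], p.Status.Phase, ready)
	}
}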

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/LogsCmd (1.2s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/LogsCmd
functional_test.go:1256: (dbg) Run:  out/minikube-linux-amd64 -p functional-969923 logs
functional_test.go:1256: (dbg) Done: out/minikube-linux-amd64 -p functional-969923 logs: (1.197452506s)
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/LogsCmd (1.20s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/LogsFileCmd (1.21s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/LogsFileCmd
functional_test.go:1270: (dbg) Run:  out/minikube-linux-amd64 -p functional-969923 logs --file /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-rc.1serialLogsFi3688196965/001/logs.txt
functional_test.go:1270: (dbg) Done: out/minikube-linux-amd64 -p functional-969923 logs --file /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-rc.1serialLogsFi3688196965/001/logs.txt: (1.204509714s)
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/LogsFileCmd (1.21s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/InvalidService (4.53s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/InvalidService
functional_test.go:2331: (dbg) Run:  kubectl --context functional-969923 apply -f testdata/invalidsvc.yaml
functional_test.go:2345: (dbg) Run:  out/minikube-linux-amd64 service invalid-svc -p functional-969923
functional_test.go:2345: (dbg) Non-zero exit: out/minikube-linux-amd64 service invalid-svc -p functional-969923: exit status 115 (346.118955ms)

                                                
                                                
-- stdout --
	┌───────────┬─────────────┬─────────────┬───────────────────────────┐
	│ NAMESPACE │    NAME     │ TARGET PORT │            URL            │
	├───────────┼─────────────┼─────────────┼───────────────────────────┤
	│ default   │ invalid-svc │ 80          │ http://192.168.49.2:30326 │
	└───────────┴─────────────┴─────────────┴───────────────────────────┘
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:2337: (dbg) Run:  kubectl --context functional-969923 delete -f testdata/invalidsvc.yaml
functional_test.go:2337: (dbg) Done: kubectl --context functional-969923 delete -f testdata/invalidsvc.yaml: (1.005942214s)
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/InvalidService (4.53s)
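InvalidService expects exit status 115 (SVC_UNREACHABLE) because the service from testdata/invalidsvc.yaml gets a NodePort but has no running pod behind it. One way to observe the same condition, sketched in Go against the context name from this run, is to ask whether the service has any ready endpoint addresses:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Context and service name are taken from this run.
	out, err := exec.Command("kubectl", "--context", "functional-969923",
		"get", "endpoints", "invalid-svc",
		"-o", "jsonpath={.subsets[*].addresses[*].ip}").Output()
	if err != nil {
		fmt.Println("lookup failed:", err)
		return
	}
	if strings.TrimSpace(string(out)) == "" {
		// Matches the SVC_UNREACHABLE condition above: the NodePort exists,
		// but no running pod backs the service.
		fmt.Println("invalid-svc has no ready endpoints")
	}
}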

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ConfigCmd (0.47s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ConfigCmd
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ConfigCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ConfigCmd
functional_test.go:1219: (dbg) Run:  out/minikube-linux-amd64 -p functional-969923 config unset cpus
functional_test.go:1219: (dbg) Run:  out/minikube-linux-amd64 -p functional-969923 config get cpus
functional_test.go:1219: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-969923 config get cpus: exit status 14 (80.043819ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
functional_test.go:1219: (dbg) Run:  out/minikube-linux-amd64 -p functional-969923 config set cpus 2
functional_test.go:1219: (dbg) Run:  out/minikube-linux-amd64 -p functional-969923 config get cpus
functional_test.go:1219: (dbg) Run:  out/minikube-linux-amd64 -p functional-969923 config unset cpus
functional_test.go:1219: (dbg) Run:  out/minikube-linux-amd64 -p functional-969923 config get cpus
functional_test.go:1219: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-969923 config get cpus: exit status 14 (81.264019ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ConfigCmd (0.47s)
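ConfigCmd cycles cpus through unset, get (exit 14), set, get, unset, get (exit 14); exit code 14 means the key is absent from the config, as the "specified key could not be found" stderr shows. A hedged Go sketch of the same cycle, assuming the binary path and profile name from this run:

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

// getCPUs returns the value of "config get cpus" and the command's exit code.
func getCPUs(mk, profile string) (string, int) {
	out, err := exec.Command(mk, "-p", profile, "config", "get", "cpus").Output()
	code := 0
	var ee *exec.ExitError
	if errors.As(err, &ee) {
		code = ee.ExitCode()
	}
	return string(out), code
}

func main() {
	const mk = "out/minikube-linux-amd64" // assumed binary path from this run
	const profile = "functional-969923"

	// With nothing set, "config get cpus" exits 14: key not found in config.
	if _, code := getCPUs(mk, profile); code == 14 {
		fmt.Println("cpus unset, as expected")
	}

	// Setting a value makes the same command succeed; unsetting restores exit 14.
	exec.Command(mk, "-p", profile, "config", "set", "cpus", "2").Run()
	if val, code := getCPUs(mk, profile); code == 0 {
		fmt.Print("cpus = ", val)
	}
	exec.Command(mk, "-p", profile, "config", "unset", "cpus").Run()
}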

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/DashboardCmd (11.32s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/DashboardCmd
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/DashboardCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/DashboardCmd
functional_test.go:920: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 0 -p functional-969923 --alsologtostderr -v=1]
functional_test.go:925: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 0 -p functional-969923 --alsologtostderr -v=1] ...
helpers_test.go:526: unable to kill pid 64590: os: process already finished
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/DashboardCmd (11.32s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/DryRun (0.54s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/DryRun
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/DryRun

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/DryRun
functional_test.go:994: (dbg) Run:  out/minikube-linux-amd64 start -p functional-969923 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-rc.1
functional_test.go:994: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-969923 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-rc.1: exit status 23 (232.053545ms)

                                                
                                                
-- stdout --
	* [functional-969923] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=22301
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22301-5579/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22301-5579/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1225 18:36:33.645305   63788 out.go:360] Setting OutFile to fd 1 ...
	I1225 18:36:33.645449   63788 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1225 18:36:33.645455   63788 out.go:374] Setting ErrFile to fd 2...
	I1225 18:36:33.645461   63788 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1225 18:36:33.645770   63788 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22301-5579/.minikube/bin
	I1225 18:36:33.646418   63788 out.go:368] Setting JSON to false
	I1225 18:36:33.647911   63788 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":1142,"bootTime":1766686652,"procs":253,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1225 18:36:33.648032   63788 start.go:143] virtualization: kvm guest
	I1225 18:36:33.655082   63788 out.go:179] * [functional-969923] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1225 18:36:33.656420   63788 notify.go:221] Checking for updates...
	I1225 18:36:33.656483   63788 out.go:179]   - MINIKUBE_LOCATION=22301
	I1225 18:36:33.658375   63788 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1225 18:36:33.660969   63788 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22301-5579/kubeconfig
	I1225 18:36:33.662674   63788 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22301-5579/.minikube
	I1225 18:36:33.664348   63788 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1225 18:36:33.665703   63788 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1225 18:36:33.667509   63788 config.go:182] Loaded profile config "functional-969923": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-rc.1
	I1225 18:36:33.668313   63788 driver.go:422] Setting default libvirt URI to qemu:///system
	I1225 18:36:33.701864   63788 docker.go:124] docker version: linux-29.1.3:Docker Engine - Community
	I1225 18:36:33.701975   63788 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1225 18:36:33.787676   63788 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:55 SystemTime:2025-12-25 18:36:33.769735407 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:dea7da592f5d1d2b7755e3a161be07f43fad8f75 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1225 18:36:33.787928   63788 docker.go:319] overlay module found
	I1225 18:36:33.790106   63788 out.go:179] * Using the docker driver based on existing profile
	I1225 18:36:33.791346   63788 start.go:309] selected driver: docker
	I1225 18:36:33.791375   63788 start.go:928] validating driver "docker" against &{Name:functional-969923 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:functional-969923 Namespace:default APIServerHAVIP: APISer
verName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptio
ns:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1225 18:36:33.791487   63788 start.go:939] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1225 18:36:33.793581   63788 out.go:203] 
	W1225 18:36:33.794683   63788 out.go:285] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1225 18:36:33.795685   63788 out.go:203] 

                                                
                                                
** /stderr **
functional_test.go:1011: (dbg) Run:  out/minikube-linux-amd64 start -p functional-969923 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-rc.1
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/DryRun (0.54s)
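DryRun asserts that starting with --dry-run and only 250MB of memory fails with exit status 23, the RSRC_INSUFFICIENT_REQ_MEMORY code shown above (250MiB is below the 1800MB usable minimum). A minimal Go sketch of that exit-code check, assuming the binary path and profile name from this run:

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	const mk = "out/minikube-linux-amd64" // assumed binary path from this run
	// --dry-run validates the requested settings against the existing profile
	// without actually (re)starting anything.
	err := exec.Command(mk, "start", "-p", "functional-969923", "--dry-run",
		"--memory", "250MB", "--driver=docker", "--container-runtime=crio").Run()
	var ee *exec.ExitError
	if errors.As(err, &ee) && ee.ExitCode() == 23 {
		// 23 is the RSRC_INSUFFICIENT_REQ_MEMORY exit seen above: 250MiB is
		// below the 1800MB usable minimum.
		fmt.Println("memory request rejected as expected")
	}
}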

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/InternationalLanguage (0.22s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/InternationalLanguage
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/InternationalLanguage

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/InternationalLanguage
functional_test.go:1040: (dbg) Run:  out/minikube-linux-amd64 start -p functional-969923 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-rc.1
functional_test.go:1040: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-969923 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-rc.1: exit status 23 (216.574866ms)

                                                
                                                
-- stdout --
	* [functional-969923] minikube v1.37.0 sur Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=22301
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22301-5579/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22301-5579/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote docker basé sur le profil existant
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1225 18:36:34.188646   64114 out.go:360] Setting OutFile to fd 1 ...
	I1225 18:36:34.188781   64114 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1225 18:36:34.188802   64114 out.go:374] Setting ErrFile to fd 2...
	I1225 18:36:34.188809   64114 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1225 18:36:34.189346   64114 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22301-5579/.minikube/bin
	I1225 18:36:34.189913   64114 out.go:368] Setting JSON to false
	I1225 18:36:34.191321   64114 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":1142,"bootTime":1766686652,"procs":254,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1225 18:36:34.191407   64114 start.go:143] virtualization: kvm guest
	I1225 18:36:34.193485   64114 out.go:179] * [functional-969923] minikube v1.37.0 sur Ubuntu 22.04 (kvm/amd64)
	I1225 18:36:34.198122   64114 notify.go:221] Checking for updates...
	I1225 18:36:34.198138   64114 out.go:179]   - MINIKUBE_LOCATION=22301
	I1225 18:36:34.200389   64114 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1225 18:36:34.201793   64114 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22301-5579/kubeconfig
	I1225 18:36:34.203056   64114 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22301-5579/.minikube
	I1225 18:36:34.204207   64114 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1225 18:36:34.205251   64114 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1225 18:36:34.209365   64114 config.go:182] Loaded profile config "functional-969923": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-rc.1
	I1225 18:36:34.210214   64114 driver.go:422] Setting default libvirt URI to qemu:///system
	I1225 18:36:34.240937   64114 docker.go:124] docker version: linux-29.1.3:Docker Engine - Community
	I1225 18:36:34.241104   64114 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1225 18:36:34.308649   64114 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:55 SystemTime:2025-12-25 18:36:34.296818652 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:dea7da592f5d1d2b7755e3a161be07f43fad8f75 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1225 18:36:34.308789   64114 docker.go:319] overlay module found
	I1225 18:36:34.310643   64114 out.go:179] * Utilisation du pilote docker basé sur le profil existant
	I1225 18:36:34.311814   64114 start.go:309] selected driver: docker
	I1225 18:36:34.311834   64114 start.go:928] validating driver "docker" against &{Name:functional-969923 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:functional-969923 Namespace:default APIServerHAVIP: APISer
verName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptio
ns:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1225 18:36:34.312110   64114 start.go:939] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1225 18:36:34.314261   64114 out.go:203] 
	W1225 18:36:34.315814   64114 out.go:285] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I1225 18:36:34.316994   64114 out.go:203] 

                                                
                                                
** /stderr **
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/InternationalLanguage (0.22s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/StatusCmd (1.27s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/StatusCmd
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/StatusCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/StatusCmd
functional_test.go:869: (dbg) Run:  out/minikube-linux-amd64 -p functional-969923 status
functional_test.go:875: (dbg) Run:  out/minikube-linux-amd64 -p functional-969923 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:887: (dbg) Run:  out/minikube-linux-amd64 -p functional-969923 status -o json
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/StatusCmd (1.27s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmdConnect (7.8s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmdConnect
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmdConnect

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmdConnect
functional_test.go:1641: (dbg) Run:  kubectl --context functional-969923 create deployment hello-node-connect --image ghcr.io/medyagh/image-mirrors/kicbase/echo-server
functional_test.go:1645: (dbg) Run:  kubectl --context functional-969923 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1650: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:353: "hello-node-connect-5d95464fd4-dbqdl" [1cac1b0e-2c87-4ce6-98dc-ae0784b19409] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
helpers_test.go:353: "hello-node-connect-5d95464fd4-dbqdl" [1cac1b0e-2c87-4ce6-98dc-ae0784b19409] Running
functional_test.go:1650: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 7.004501558s
functional_test.go:1659: (dbg) Run:  out/minikube-linux-amd64 -p functional-969923 service hello-node-connect --url
functional_test.go:1665: found endpoint for hello-node-connect: http://192.168.49.2:30399
functional_test.go:1685: http://192.168.49.2:30399: success! body:
Request served by hello-node-connect-5d95464fd4-dbqdl

                                                
                                                
HTTP/1.1 GET /

                                                
                                                
Host: 192.168.49.2:30399
Accept-Encoding: gzip
User-Agent: Go-http-client/1.1
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmdConnect (7.80s)
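ServiceCmdConnect creates and exposes a NodePort deployment, resolves its URL with minikube service --url, and checks the echo-server response. A compact Go sketch of the resolve-and-GET half, assuming the binary path, profile and service name from this run:

package main

import (
	"fmt"
	"io"
	"net/http"
	"os/exec"
	"strings"
)

func main() {
	const mk = "out/minikube-linux-amd64" // assumed binary path from this run
	// Resolve the NodePort URL for the service created above.
	out, err := exec.Command(mk, "-p", "functional-969923",
		"service", "hello-node-connect", "--url").Output()
	if err != nil {
		panic(err)
	}
	url := strings.TrimSpace(string(out))

	// The echo-server answers with a description of the request it served.
	resp, err := http.Get(url)
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Println(string(body))
}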

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/AddonsCmd (0.16s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/AddonsCmd
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/AddonsCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/AddonsCmd
functional_test.go:1700: (dbg) Run:  out/minikube-linux-amd64 -p functional-969923 addons list
functional_test.go:1712: (dbg) Run:  out/minikube-linux-amd64 -p functional-969923 addons list -o json
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/AddonsCmd (0.16s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PersistentVolumeClaim (22.85s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PersistentVolumeClaim
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PersistentVolumeClaim

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:50: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:353: "storage-provisioner" [3f49b6ab-8a9e-4915-98c0-a851759f320e] Running
functional_test_pvc_test.go:50: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 6.004004373s
functional_test_pvc_test.go:55: (dbg) Run:  kubectl --context functional-969923 get storageclass -o=json
functional_test_pvc_test.go:75: (dbg) Run:  kubectl --context functional-969923 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:82: (dbg) Run:  kubectl --context functional-969923 get pvc myclaim -o=json
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-969923 apply -f testdata/storage-provisioner/pod.yaml
I1225 18:36:28.181929    9112 detect.go:223] nested VM detected
functional_test_pvc_test.go:140: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PersistentVolumeClaim: waiting 6m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:353: "sp-pod" [996fbbed-ff04-4908-b530-cd372cf21c4b] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:353: "sp-pod" [996fbbed-ff04-4908-b530-cd372cf21c4b] Running
functional_test_pvc_test.go:140: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 8.004106143s
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-969923 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:112: (dbg) Run:  kubectl --context functional-969923 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:112: (dbg) Done: kubectl --context functional-969923 delete -f testdata/storage-provisioner/pod.yaml: (2.150907621s)
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-969923 apply -f testdata/storage-provisioner/pod.yaml
I1225 18:36:38.603878    9112 detect.go:223] nested VM detected
functional_test_pvc_test.go:140: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PersistentVolumeClaim: waiting 6m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:353: "sp-pod" [be81ee87-4429-414a-a713-8f43fad14622] Pending
helpers_test.go:353: "sp-pod" [be81ee87-4429-414a-a713-8f43fad14622] Running
functional_test_pvc_test.go:140: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 6.004170494s
functional_test_pvc_test.go:120: (dbg) Run:  kubectl --context functional-969923 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PersistentVolumeClaim (22.85s)
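PersistentVolumeClaim verifies that data written into the PVC-backed mount survives deleting and recreating the pod. A rough Go sketch of that persistence check, assuming the context name, pod name and testdata paths from this run (the kc helper is illustrative):

package main

import (
	"fmt"
	"os/exec"
)

// kc runs kubectl against the context used in this run and returns combined output.
func kc(args ...string) (string, error) {
	out, err := exec.Command("kubectl",
		append([]string{"--context", "functional-969923"}, args...)...).CombinedOutput()
	return string(out), err
}

func main() {
	// Write a marker file into the PVC-backed mount, then recycle the pod.
	kc("exec", "sp-pod", "--", "touch", "/tmp/mount/foo")
	kc("delete", "-f", "testdata/storage-provisioner/pod.yaml")
	kc("apply", "-f", "testdata/storage-provisioner/pod.yaml")
	// (The test waits for the new sp-pod to reach Running before this check.)

	// If the claim persisted across pod deletion, the marker file is still there.
	out, err := kc("exec", "sp-pod", "--", "ls", "/tmp/mount")
	fmt.Println(out, err)
}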

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/SSHCmd (0.59s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/SSHCmd
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/SSHCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/SSHCmd
functional_test.go:1735: (dbg) Run:  out/minikube-linux-amd64 -p functional-969923 ssh "echo hello"
functional_test.go:1752: (dbg) Run:  out/minikube-linux-amd64 -p functional-969923 ssh "cat /etc/hostname"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/SSHCmd (0.59s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/CpCmd (1.88s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/CpCmd
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/CpCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/CpCmd
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p functional-969923 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p functional-969923 ssh -n functional-969923 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p functional-969923 cp functional-969923:/home/docker/cp-test.txt /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-rc.1parallelCpCm1722467114/001/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p functional-969923 ssh -n functional-969923 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p functional-969923 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p functional-969923 ssh -n functional-969923 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/CpCmd (1.88s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/MySQL (22.29s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/MySQL
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/MySQL

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/MySQL
functional_test.go:1803: (dbg) Run:  kubectl --context functional-969923 replace --force -f testdata/mysql.yaml
functional_test.go:1809: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:353: "mysql-7d7b65bc95-kljv8" [e97f2d46-5c1a-401c-b3ed-273a501a8613] Pending
helpers_test.go:353: "mysql-7d7b65bc95-kljv8" [e97f2d46-5c1a-401c-b3ed-273a501a8613] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:353: "mysql-7d7b65bc95-kljv8" [e97f2d46-5c1a-401c-b3ed-273a501a8613] Running
functional_test.go:1809: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/MySQL: app=mysql healthy within 15.004681723s
functional_test.go:1817: (dbg) Run:  kubectl --context functional-969923 exec mysql-7d7b65bc95-kljv8 -- mysql -ppassword -e "show databases;"
functional_test.go:1817: (dbg) Non-zero exit: kubectl --context functional-969923 exec mysql-7d7b65bc95-kljv8 -- mysql -ppassword -e "show databases;": exit status 1 (110.691122ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

                                                
                                                
** /stderr **
I1225 18:36:45.226361    9112 retry.go:84] will retry after 1.5s: exit status 1 (duplicate log for 2m49.9s)
functional_test.go:1817: (dbg) Run:  kubectl --context functional-969923 exec mysql-7d7b65bc95-kljv8 -- mysql -ppassword -e "show databases;"
functional_test.go:1817: (dbg) Non-zero exit: kubectl --context functional-969923 exec mysql-7d7b65bc95-kljv8 -- mysql -ppassword -e "show databases;": exit status 1 (130.021887ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

                                                
                                                
** /stderr **
functional_test.go:1817: (dbg) Run:  kubectl --context functional-969923 exec mysql-7d7b65bc95-kljv8 -- mysql -ppassword -e "show databases;"
functional_test.go:1817: (dbg) Non-zero exit: kubectl --context functional-969923 exec mysql-7d7b65bc95-kljv8 -- mysql -ppassword -e "show databases;": exit status 1 (107.208663ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

                                                
                                                
** /stderr **
functional_test.go:1817: (dbg) Run:  kubectl --context functional-969923 exec mysql-7d7b65bc95-kljv8 -- mysql -ppassword -e "show databases;"
functional_test.go:1817: (dbg) Non-zero exit: kubectl --context functional-969923 exec mysql-7d7b65bc95-kljv8 -- mysql -ppassword -e "show databases;": exit status 1 (90.996552ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

                                                
                                                
** /stderr **
functional_test.go:1817: (dbg) Run:  kubectl --context functional-969923 exec mysql-7d7b65bc95-kljv8 -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/MySQL (22.29s)
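The MySQL test retries the "show databases" query because mysqld keeps rejecting or refusing connections for a short while after the pod reports Running, as the access-denied and socket errors above show. A simple retry loop in Go, assuming the context, pod name and password from this run:

package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	// Pod name comes from this run; the root password comes from testdata/mysql.yaml.
	args := []string{"--context", "functional-969923", "exec", "mysql-7d7b65bc95-kljv8",
		"--", "mysql", "-ppassword", "-e", "show databases;"}

	// mysqld needs a short warm-up after the pod reports Running, hence the
	// retries, mirroring the retry loop visible in the log above.
	for attempt := 1; attempt <= 10; attempt++ {
		out, err := exec.Command("kubectl", args...).CombinedOutput()
		if err == nil {
			fmt.Print(string(out))
			return
		}
		time.Sleep(2 * time.Second)
	}
	fmt.Println("mysql never became reachable")
}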

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/FileSync (0.3s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/FileSync
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/FileSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/FileSync
functional_test.go:1939: Checking for existence of /etc/test/nested/copy/9112/hosts within VM
functional_test.go:1941: (dbg) Run:  out/minikube-linux-amd64 -p functional-969923 ssh "sudo cat /etc/test/nested/copy/9112/hosts"
functional_test.go:1946: file sync test content: Test file for checking file sync process
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/FileSync (0.30s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/CertSync (1.85s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/CertSync
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/CertSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/CertSync
functional_test.go:1982: Checking for existence of /etc/ssl/certs/9112.pem within VM
functional_test.go:1983: (dbg) Run:  out/minikube-linux-amd64 -p functional-969923 ssh "sudo cat /etc/ssl/certs/9112.pem"
functional_test.go:1982: Checking for existence of /usr/share/ca-certificates/9112.pem within VM
functional_test.go:1983: (dbg) Run:  out/minikube-linux-amd64 -p functional-969923 ssh "sudo cat /usr/share/ca-certificates/9112.pem"
functional_test.go:1982: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1983: (dbg) Run:  out/minikube-linux-amd64 -p functional-969923 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:2009: Checking for existence of /etc/ssl/certs/91122.pem within VM
functional_test.go:2010: (dbg) Run:  out/minikube-linux-amd64 -p functional-969923 ssh "sudo cat /etc/ssl/certs/91122.pem"
functional_test.go:2009: Checking for existence of /usr/share/ca-certificates/91122.pem within VM
functional_test.go:2010: (dbg) Run:  out/minikube-linux-amd64 -p functional-969923 ssh "sudo cat /usr/share/ca-certificates/91122.pem"
functional_test.go:2009: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2010: (dbg) Run:  out/minikube-linux-amd64 -p functional-969923 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/CertSync (1.85s)
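CertSync checks that the PID-derived certificate files (9112.pem, 91122.pem and their hashed .0 names) are present at the expected locations inside the node. A short Go sketch that probes a few of those paths over minikube ssh, assuming the binary path and profile name from this run:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	const mk = "out/minikube-linux-amd64" // assumed binary path from this run
	// Paths taken from the checks above; 9112 is the test process PID in this run.
	paths := []string{
		"/etc/ssl/certs/9112.pem",
		"/usr/share/ca-certificates/9112.pem",
		"/etc/ssl/certs/51391683.0",
		"/etc/ssl/certs/91122.pem",
	}
	for _, p := range paths {
		err := exec.Command(mk, "-p", "functional-969923", "ssh", "sudo cat "+p).Run()
		fmt.Printf("%s present: %v\n", p, err == nil)
	}
}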

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/NodeLabels (0.07s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/NodeLabels
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/NodeLabels

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/NodeLabels
functional_test.go:234: (dbg) Run:  kubectl --context functional-969923 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/NodeLabels (0.07s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/NonActiveRuntimeDisabled (0.63s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/NonActiveRuntimeDisabled

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/NonActiveRuntimeDisabled
functional_test.go:2037: (dbg) Run:  out/minikube-linux-amd64 -p functional-969923 ssh "sudo systemctl is-active docker"
functional_test.go:2037: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-969923 ssh "sudo systemctl is-active docker": exit status 1 (315.468007ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
functional_test.go:2037: (dbg) Run:  out/minikube-linux-amd64 -p functional-969923 ssh "sudo systemctl is-active containerd"
functional_test.go:2037: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-969923 ssh "sudo systemctl is-active containerd": exit status 1 (310.882355ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/NonActiveRuntimeDisabled (0.63s)
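NonActiveRuntimeDisabled confirms that on this crio cluster the docker and containerd units are inactive; systemctl is-active exits with status 3 for an inactive unit, which is the "Process exited with status 3" propagated through ssh above. A small Go sketch of the same probe, assuming the binary path and profile name from this run:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	const mk = "out/minikube-linux-amd64" // assumed binary path from this run
	for _, unit := range []string{"docker", "containerd"} {
		out, err := exec.Command(mk, "-p", "functional-969923", "ssh",
			"sudo systemctl is-active "+unit).CombinedOutput()
		// On this crio node both units should print "inactive" and exit
		// non-zero (systemctl uses status 3 for inactive units).
		fmt.Printf("%s: %s (err=%v)\n", unit, strings.TrimSpace(string(out)), err)
	}
}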

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/License (0.26s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/License
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/License

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/License
functional_test.go:2298: (dbg) Run:  out/minikube-linux-amd64 license
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/License (0.26s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageListShort (0.26s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageListShort

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageListShort
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-969923 image ls --format short --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-969923 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10.1
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.35.0-rc.1
registry.k8s.io/kube-proxy:v1.35.0-rc.1
registry.k8s.io/kube-controller-manager:v1.35.0-rc.1
registry.k8s.io/kube-apiserver:v1.35.0-rc.1
registry.k8s.io/etcd:3.6.6-0
registry.k8s.io/coredns/coredns:v1.13.1
public.ecr.aws/nginx/nginx:alpine
public.ecr.aws/docker/library/mysql:8.4
localhost/minikube-local-cache-test:functional-969923
ghcr.io/medyagh/image-mirrors/kicbase/echo-server:latest
ghcr.io/medyagh/image-mirrors/kicbase/echo-server:functional-969923
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/kindest/kindnetd:v20251212-v0.29.0-alpha-105-g20ccfc88
docker.io/kindest/kindnetd:v20250512-df8de77b
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-969923 image ls --format short --alsologtostderr:
I1225 18:36:45.710412   65775 out.go:360] Setting OutFile to fd 1 ...
I1225 18:36:45.710701   65775 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1225 18:36:45.710713   65775 out.go:374] Setting ErrFile to fd 2...
I1225 18:36:45.710720   65775 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1225 18:36:45.711018   65775 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22301-5579/.minikube/bin
I1225 18:36:45.711804   65775 config.go:182] Loaded profile config "functional-969923": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-rc.1
I1225 18:36:45.711955   65775 config.go:182] Loaded profile config "functional-969923": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-rc.1
I1225 18:36:45.712474   65775 cli_runner.go:164] Run: docker container inspect functional-969923 --format={{.State.Status}}
I1225 18:36:45.737759   65775 ssh_runner.go:195] Run: systemctl --version
I1225 18:36:45.737842   65775 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-969923
I1225 18:36:45.760627   65775 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/22301-5579/.minikube/machines/functional-969923/id_rsa Username:docker}
I1225 18:36:45.856348   65775 ssh_runner.go:195] Run: sudo crictl --timeout=10s images --output json
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageListShort (0.26s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageListTable (0.24s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageListTable

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageListTable
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-969923 image ls --format table --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-969923 image ls --format table --alsologtostderr:
┌───────────────────────────────────────────────────┬───────────────────────────────────────┬───────────────┬────────┐
│                       IMAGE                       │                  TAG                  │   IMAGE ID    │  SIZE  │
├───────────────────────────────────────────────────┼───────────────────────────────────────┼───────────────┼────────┤
│ docker.io/kindest/kindnetd                        │ v20251212-v0.29.0-alpha-105-g20ccfc88 │ 4921d7a6dffa9 │ 108MB  │
│ registry.k8s.io/kube-proxy                        │ v1.35.0-rc.1                          │ af0321f3a4f38 │ 72MB   │
│ registry.k8s.io/pause                             │ 3.1                                   │ da86e6ba6ca19 │ 747kB  │
│ registry.k8s.io/pause                             │ 3.10.1                                │ cd073f4c5f6a8 │ 742kB  │
│ gcr.io/k8s-minikube/storage-provisioner           │ v5                                    │ 6e38f40d628db │ 31.5MB │
│ ghcr.io/medyagh/image-mirrors/kicbase/echo-server │ functional-969923                     │ 9056ab77afb8e │ 4.95MB │
│ ghcr.io/medyagh/image-mirrors/kicbase/echo-server │ latest                                │ 9056ab77afb8e │ 4.95MB │
│ public.ecr.aws/docker/library/mysql               │ 8.4                                   │ 5e3dcc4ab5604 │ 804MB  │
│ registry.k8s.io/etcd                              │ 3.6.6-0                               │ 0a108f7189562 │ 63.6MB │
│ registry.k8s.io/kube-controller-manager           │ v1.35.0-rc.1                          │ 5032a56602e1b │ 76.9MB │
│ registry.k8s.io/kube-scheduler                    │ v1.35.0-rc.1                          │ 73f80cdc073da │ 52.8MB │
│ registry.k8s.io/pause                             │ 3.3                                   │ 0184c1613d929 │ 686kB  │
│ public.ecr.aws/nginx/nginx                        │ alpine                                │ 04da2b0513cd7 │ 55.2MB │
│ registry.k8s.io/coredns/coredns                   │ v1.13.1                               │ aa5e3ebc0dfed │ 79.2MB │
│ registry.k8s.io/pause                             │ latest                                │ 350b164e7ae1d │ 247kB  │
│ gcr.io/k8s-minikube/busybox                       │ 1.28.4-glibc                          │ 56cc512116c8f │ 4.63MB │
│ localhost/minikube-local-cache-test               │ functional-969923                     │ b72febd6809af │ 3.33kB │
│ registry.k8s.io/kube-apiserver                    │ v1.35.0-rc.1                          │ 58865405a13bc │ 90.8MB │
│ docker.io/kindest/kindnetd                        │ v20250512-df8de77b                    │ 409467f978b4a │ 109MB  │
└───────────────────────────────────────────────────┴───────────────────────────────────────┴───────────────┴────────┘
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-969923 image ls --format table --alsologtostderr:
I1225 18:36:46.403736   66396 out.go:360] Setting OutFile to fd 1 ...
I1225 18:36:46.403827   66396 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1225 18:36:46.403832   66396 out.go:374] Setting ErrFile to fd 2...
I1225 18:36:46.403836   66396 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1225 18:36:46.404077   66396 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22301-5579/.minikube/bin
I1225 18:36:46.404652   66396 config.go:182] Loaded profile config "functional-969923": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-rc.1
I1225 18:36:46.404769   66396 config.go:182] Loaded profile config "functional-969923": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-rc.1
I1225 18:36:46.405359   66396 cli_runner.go:164] Run: docker container inspect functional-969923 --format={{.State.Status}}
I1225 18:36:46.424483   66396 ssh_runner.go:195] Run: systemctl --version
I1225 18:36:46.424543   66396 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-969923
I1225 18:36:46.443870   66396 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/22301-5579/.minikube/machines/functional-969923/id_rsa Username:docker}
I1225 18:36:46.534740   66396 ssh_runner.go:195] Run: sudo crictl --timeout=10s images --output json
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageListTable (0.24s)

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageListJson (0.28s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageListJson

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageListJson
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-969923 image ls --format json --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-969923 image ls --format json --alsologtostderr:
[{"id":"115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7","repoDigests":["docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a","docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"],"repoTags":[],"size":"43824855"},{"id":"58865405a13bccac1d74bc3f446dddd22e6ef0d7ee8b52363c86dd31838976ce","repoDigests":["registry.k8s.io/kube-apiserver@sha256:4527daf97bed5f1caff2267f9b84a6c626b82615d9ff7f933619321aebde536f","registry.k8s.io/kube-apiserver@sha256:58367b5c0428495c0c12411fa7a018f5d40fe57307b85d8935b1ed35706ff7ee"],"repoTags":["registry.k8s.io/kube-apiserver:v1.35.0-rc.1"],"size":"90844140"},{"id":"5032a56602e1b9bd8856699701b6148aa1b9901d05b61f893df3b57f84aca614","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:57ab0f75f58d99f4be7bff7bdda015fcbf1b7c20e58ba2722c8c39f751dc8c98","registry.k8s.io/kube-controller-manager@sha256:94b94fef358192d13794f5acd21909a3eb0b3e96
0ed4286ef37a437e7f9272cd"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.35.0-rc.1"],"size":"76893010"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":["registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04"],"repoTags":["registry.k8s.io/pause:3.3"],"size":"686139"},{"id":"b72febd6809af73c5efd097bef69533eb035ae1d76362738de4985274ee6a725","repoDigests":["localhost/minikube-local-cache-test@sha256:1e53d025fc92acd1b997b248c909702069216ee91c7f1b6c0b9c20b69824fe76"],"repoTags":["localhost/minikube-local-cache-test:functional-969923"],"size":"3330"},{"id":"5e3dcc4ab5604ab9bdf1054833d4f0ac396465de830ccac42d4f59131db9ba23","repoDigests":["public.ecr.aws/docker/library/mysql@sha256:1f5b0aca09cfa06d9a7b89b28d349c1e01ba0d31339a4786fbcb3d5927070130","public.ecr.aws/docker/library/mysql@sha256:eaf64e87ae0d1136d46405ad56c9010de509fd5b949b9c8ede45c94f47804d21"],"repoTags":["public.ecr.aws/docker/library/mysql:8.4"],"size":"803760263"},
{"id":"73f80cdc073daa4d501207f9e6dec1fa9eea5f27e8d347b8a0c4bad8811eecdc","repoDigests":["registry.k8s.io/kube-scheduler@sha256:1e2bf4dfee764cc2eb3300c543b3ce1b00ca3ffc46b93f2b7ef326fbc2385636","registry.k8s.io/kube-scheduler@sha256:8155e3db27c7081abfc8eb5da70820cfeaf0bba7449e45360e8220e670f417d3"],"repoTags":["registry.k8s.io/kube-scheduler:v1.35.0-rc.1"],"size":"52763474"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":["registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e"],"repoTags":["registry.k8s.io/pause:3.1"],"size":"746911"},{"id":"409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c","repoDigests":["docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a","docker.io/kindest/kindnetd@sha256:7a9c9fa59dd517cdc2c82eef1e51392524dd285e9cf7cb5a851c49f294d6cd11"],"repoTags":["docker.io/kindest/kindnetd:v20250512-df8de77b"],"size":"109379124"},{"id":"4921d7a6dffa922dd679732ba
4797085c4f39e9a53bee8b6fdb1d463e8571251","repoDigests":["docker.io/kindest/kindnetd@sha256:377e2e7a513148f7c942b51cd57bdce1589940df856105384ac7f753a1ab43ae","docker.io/kindest/kindnetd@sha256:7c22558dc06a570d46ea6e8a73b23cdc754eb81f7c08d3441a3171ad359ffc27"],"repoTags":["docker.io/kindest/kindnetd:v20251212-v0.29.0-alpha-105-g20ccfc88"],"size":"107598204"},{"id":"07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558","repoDigests":["docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93","docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029"],"repoTags":[],"size":"249229937"},{"id":"56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e","gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998"],"repoTags":["gcr.io/k8s-minikube/bu
sybox:1.28.4-glibc"],"size":"4631262"},{"id":"9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30","repoDigests":["ghcr.io/medyagh/image-mirrors/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6","ghcr.io/medyagh/image-mirrors/kicbase/echo-server@sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86","ghcr.io/medyagh/image-mirrors/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf"],"repoTags":["ghcr.io/medyagh/image-mirrors/kicbase/echo-server:functional-969923","ghcr.io/medyagh/image-mirrors/kicbase/echo-server:latest"],"size":"4945246"},{"id":"04da2b0513cd78d8d29d60575cef80813c5496c15a801921e47efdf0feba39e5","repoDigests":["public.ecr.aws/nginx/nginx@sha256:00e053577693e0ee5f7f8b433cdb249624af188622d0da5df20eef4e25a0881c","public.ecr.aws/nginx/nginx@sha256:a411c634df4374901a4a9370626801998f159652f627b1cdfbbbe012adcd6c76"],"repoTags":["public.ecr.aws/nginx/nginx:alpine"],"size":"55157106"},{"id"
:"aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139","repoDigests":["registry.k8s.io/coredns/coredns@sha256:246e7333fde10251c693b68f13d21d6d64c7dbad866bbfa11bd49315e3f725a7","registry.k8s.io/coredns/coredns@sha256:9b9128672209474da07c91439bf15ed704ae05ad918dd6454e5b6ae14e35fee6"],"repoTags":["registry.k8s.io/coredns/coredns:v1.13.1"],"size":"79193994"},{"id":"af0321f3a4f388cfb978464739c323ebf891a7b0b50cdfd7179e92f141dad42a","repoDigests":["registry.k8s.io/kube-proxy@sha256:0efaa6b2a17dbaaac351bb0f55c1a495d297d87ac86b16965ec52e835c2b48d9","registry.k8s.io/kube-proxy@sha256:bdd1fa8b53558a2e1967379a36b085c93faf15581e5fa9f212baf679d89c5bb5"],"repoTags":["registry.k8s.io/kube-proxy:v1.35.0-rc.1"],"size":"71986585"},{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944","gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0
d84f5b36e4511d4ebf583f38f"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31470524"},{"id":"0a108f7189562e99793bdecab61fdf1a7c9d913af3385de9da17fb9d6ff430e2","repoDigests":["registry.k8s.io/etcd@sha256:5279f56db4f32772bb41e47ca44c553f5c87a08fdf339d74c23a4cdc3c388d6a","registry.k8s.io/etcd@sha256:60a30b5d81b2217555e2cfb9537f655b7ba97220b99c39ee2e162a7127225890"],"repoTags":["registry.k8s.io/etcd:3.6.6-0"],"size":"63582405"},{"id":"cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f","repoDigests":["registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c","registry.k8s.io/pause@sha256:e5b941ef8f71de54dc3a13398226c269ba217d06650a21bd3afcf9d890cf1f41"],"repoTags":["registry.k8s.io/pause:3.10.1"],"size":"742092"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":["registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9"],"repoTags":["registry.k8s.io/pause:latest"],"size"
:"247077"}]
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-969923 image ls --format json --alsologtostderr:
I1225 18:36:46.147381   66110 out.go:360] Setting OutFile to fd 1 ...
I1225 18:36:46.147988   66110 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1225 18:36:46.147999   66110 out.go:374] Setting ErrFile to fd 2...
I1225 18:36:46.148006   66110 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1225 18:36:46.148492   66110 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22301-5579/.minikube/bin
I1225 18:36:46.149550   66110 config.go:182] Loaded profile config "functional-969923": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-rc.1
I1225 18:36:46.149808   66110 config.go:182] Loaded profile config "functional-969923": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-rc.1
I1225 18:36:46.150728   66110 cli_runner.go:164] Run: docker container inspect functional-969923 --format={{.State.Status}}
I1225 18:36:46.173864   66110 ssh_runner.go:195] Run: systemctl --version
I1225 18:36:46.173947   66110 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-969923
I1225 18:36:46.196691   66110 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/22301-5579/.minikube/machines/functional-969923/id_rsa Username:docker}
I1225 18:36:46.290246   66110 ssh_runner.go:195] Run: sudo crictl --timeout=10s images --output json
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageListJson (0.28s)
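Note: the stdout captured above is the raw JSON emitted by "minikube image ls --format json"; each entry carries the keys id, repoDigests, repoTags and size (size is a byte count reported as a string). As a minimal, illustrative sketch only (not part of the test suite), that output can be decoded in Go as below, reading the JSON from stdin, e.g. piped from "out/minikube-linux-amd64 -p functional-969923 image ls --format json".

// listimages.go - illustrative only: decodes the JSON shown in the ImageListJson stdout above.
package main

import (
	"encoding/json"
	"fmt"
	"os"
)

// image mirrors the keys visible in the captured output.
type image struct {
	ID          string   `json:"id"`
	RepoDigests []string `json:"repoDigests"`
	RepoTags    []string `json:"repoTags"`
	Size        string   `json:"size"` // bytes, reported as a string
}

func main() {
	var images []image
	if err := json.NewDecoder(os.Stdin).Decode(&images); err != nil {
		fmt.Fprintln(os.Stderr, "decode:", err)
		os.Exit(1)
	}
	for _, img := range images {
		// Untagged images (repoTags: []) fall back to a digest or a short ID.
		name := img.ID[:12]
		if len(img.RepoTags) > 0 {
			name = img.RepoTags[0]
		} else if len(img.RepoDigests) > 0 {
			name = img.RepoDigests[0]
		}
		fmt.Printf("%-80s %s bytes\n", name, img.Size)
	}
}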

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageListYaml (0.27s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageListYaml

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageListYaml
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-969923 image ls --format yaml --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-969923 image ls --format yaml --alsologtostderr:
- id: 07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558
repoDigests:
- docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93
- docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029
repoTags: []
size: "249229937"
- id: 9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30
repoDigests:
- ghcr.io/medyagh/image-mirrors/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6
- ghcr.io/medyagh/image-mirrors/kicbase/echo-server@sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86
- ghcr.io/medyagh/image-mirrors/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf
repoTags:
- ghcr.io/medyagh/image-mirrors/kicbase/echo-server:functional-969923
- ghcr.io/medyagh/image-mirrors/kicbase/echo-server:latest
size: "4945246"
- id: b72febd6809af73c5efd097bef69533eb035ae1d76362738de4985274ee6a725
repoDigests:
- localhost/minikube-local-cache-test@sha256:1e53d025fc92acd1b997b248c909702069216ee91c7f1b6c0b9c20b69824fe76
repoTags:
- localhost/minikube-local-cache-test:functional-969923
size: "3330"
- id: 04da2b0513cd78d8d29d60575cef80813c5496c15a801921e47efdf0feba39e5
repoDigests:
- public.ecr.aws/nginx/nginx@sha256:00e053577693e0ee5f7f8b433cdb249624af188622d0da5df20eef4e25a0881c
- public.ecr.aws/nginx/nginx@sha256:a411c634df4374901a4a9370626801998f159652f627b1cdfbbbe012adcd6c76
repoTags:
- public.ecr.aws/nginx/nginx:alpine
size: "55157106"
- id: aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:246e7333fde10251c693b68f13d21d6d64c7dbad866bbfa11bd49315e3f725a7
- registry.k8s.io/coredns/coredns@sha256:9b9128672209474da07c91439bf15ed704ae05ad918dd6454e5b6ae14e35fee6
repoTags:
- registry.k8s.io/coredns/coredns:v1.13.1
size: "79193994"
- id: 58865405a13bccac1d74bc3f446dddd22e6ef0d7ee8b52363c86dd31838976ce
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:4527daf97bed5f1caff2267f9b84a6c626b82615d9ff7f933619321aebde536f
- registry.k8s.io/kube-apiserver@sha256:58367b5c0428495c0c12411fa7a018f5d40fe57307b85d8935b1ed35706ff7ee
repoTags:
- registry.k8s.io/kube-apiserver:v1.35.0-rc.1
size: "90844140"
- id: 5032a56602e1b9bd8856699701b6148aa1b9901d05b61f893df3b57f84aca614
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:57ab0f75f58d99f4be7bff7bdda015fcbf1b7c20e58ba2722c8c39f751dc8c98
- registry.k8s.io/kube-controller-manager@sha256:94b94fef358192d13794f5acd21909a3eb0b3e960ed4286ef37a437e7f9272cd
repoTags:
- registry.k8s.io/kube-controller-manager:v1.35.0-rc.1
size: "76893010"
- id: af0321f3a4f388cfb978464739c323ebf891a7b0b50cdfd7179e92f141dad42a
repoDigests:
- registry.k8s.io/kube-proxy@sha256:0efaa6b2a17dbaaac351bb0f55c1a495d297d87ac86b16965ec52e835c2b48d9
- registry.k8s.io/kube-proxy@sha256:bdd1fa8b53558a2e1967379a36b085c93faf15581e5fa9f212baf679d89c5bb5
repoTags:
- registry.k8s.io/kube-proxy:v1.35.0-rc.1
size: "71986585"
- id: 4921d7a6dffa922dd679732ba4797085c4f39e9a53bee8b6fdb1d463e8571251
repoDigests:
- docker.io/kindest/kindnetd@sha256:377e2e7a513148f7c942b51cd57bdce1589940df856105384ac7f753a1ab43ae
- docker.io/kindest/kindnetd@sha256:7c22558dc06a570d46ea6e8a73b23cdc754eb81f7c08d3441a3171ad359ffc27
repoTags:
- docker.io/kindest/kindnetd:v20251212-v0.29.0-alpha-105-g20ccfc88
size: "107598204"
- id: 5e3dcc4ab5604ab9bdf1054833d4f0ac396465de830ccac42d4f59131db9ba23
repoDigests:
- public.ecr.aws/docker/library/mysql@sha256:1f5b0aca09cfa06d9a7b89b28d349c1e01ba0d31339a4786fbcb3d5927070130
- public.ecr.aws/docker/library/mysql@sha256:eaf64e87ae0d1136d46405ad56c9010de509fd5b949b9c8ede45c94f47804d21
repoTags:
- public.ecr.aws/docker/library/mysql:8.4
size: "803760263"
- id: 0a108f7189562e99793bdecab61fdf1a7c9d913af3385de9da17fb9d6ff430e2
repoDigests:
- registry.k8s.io/etcd@sha256:5279f56db4f32772bb41e47ca44c553f5c87a08fdf339d74c23a4cdc3c388d6a
- registry.k8s.io/etcd@sha256:60a30b5d81b2217555e2cfb9537f655b7ba97220b99c39ee2e162a7127225890
repoTags:
- registry.k8s.io/etcd:3.6.6-0
size: "63582405"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests:
- registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e
repoTags:
- registry.k8s.io/pause:3.1
size: "746911"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests:
- registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04
repoTags:
- registry.k8s.io/pause:3.3
size: "686139"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests:
- registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9
repoTags:
- registry.k8s.io/pause:latest
size: "247077"
- id: 56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
- gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "4631262"
- id: cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f
repoDigests:
- registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c
- registry.k8s.io/pause@sha256:e5b941ef8f71de54dc3a13398226c269ba217d06650a21bd3afcf9d890cf1f41
repoTags:
- registry.k8s.io/pause:3.10.1
size: "742092"
- id: 409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c
repoDigests:
- docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a
- docker.io/kindest/kindnetd@sha256:7a9c9fa59dd517cdc2c82eef1e51392524dd285e9cf7cb5a851c49f294d6cd11
repoTags:
- docker.io/kindest/kindnetd:v20250512-df8de77b
size: "109379124"
- id: 115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7
repoDigests:
- docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a
- docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c
repoTags: []
size: "43824855"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
- gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31470524"
- id: 73f80cdc073daa4d501207f9e6dec1fa9eea5f27e8d347b8a0c4bad8811eecdc
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:1e2bf4dfee764cc2eb3300c543b3ce1b00ca3ffc46b93f2b7ef326fbc2385636
- registry.k8s.io/kube-scheduler@sha256:8155e3db27c7081abfc8eb5da70820cfeaf0bba7449e45360e8220e670f417d3
repoTags:
- registry.k8s.io/kube-scheduler:v1.35.0-rc.1
size: "52763474"

functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-969923 image ls --format yaml --alsologtostderr:
I1225 18:36:45.853003   65943 out.go:360] Setting OutFile to fd 1 ...
I1225 18:36:45.853109   65943 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1225 18:36:45.853121   65943 out.go:374] Setting ErrFile to fd 2...
I1225 18:36:45.853128   65943 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1225 18:36:45.853348   65943 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22301-5579/.minikube/bin
I1225 18:36:45.854034   65943 config.go:182] Loaded profile config "functional-969923": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-rc.1
I1225 18:36:45.854137   65943 config.go:182] Loaded profile config "functional-969923": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-rc.1
I1225 18:36:45.854532   65943 cli_runner.go:164] Run: docker container inspect functional-969923 --format={{.State.Status}}
I1225 18:36:45.875319   65943 ssh_runner.go:195] Run: systemctl --version
I1225 18:36:45.875387   65943 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-969923
I1225 18:36:45.898175   65943 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/22301-5579/.minikube/machines/functional-969923/id_rsa Username:docker}
I1225 18:36:45.993315   65943 ssh_runner.go:195] Run: sudo crictl --timeout=10s images --output json
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageListYaml (0.27s)

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageBuild (3.45s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageBuild

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageBuild
functional_test.go:323: (dbg) Run:  out/minikube-linux-amd64 -p functional-969923 ssh pgrep buildkitd
functional_test.go:323: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-969923 ssh pgrep buildkitd: exit status 1 (329.171388ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:330: (dbg) Run:  out/minikube-linux-amd64 -p functional-969923 image build -t localhost/my-image:functional-969923 testdata/build --alsologtostderr
functional_test.go:330: (dbg) Done: out/minikube-linux-amd64 -p functional-969923 image build -t localhost/my-image:functional-969923 testdata/build --alsologtostderr: (2.874648221s)
functional_test.go:335: (dbg) Stdout: out/minikube-linux-amd64 -p functional-969923 image build -t localhost/my-image:functional-969923 testdata/build --alsologtostderr:
STEP 1/3: FROM gcr.io/k8s-minikube/busybox
STEP 2/3: RUN true
--> e0855b4ee27
STEP 3/3: ADD content.txt /
COMMIT localhost/my-image:functional-969923
--> 2a3e0b5a238
Successfully tagged localhost/my-image:functional-969923
2a3e0b5a2384596e67653276d62678c6b7f9a766d15089843129b55d9de2b4f6
functional_test.go:338: (dbg) Stderr: out/minikube-linux-amd64 -p functional-969923 image build -t localhost/my-image:functional-969923 testdata/build --alsologtostderr:
I1225 18:36:46.295090   66325 out.go:360] Setting OutFile to fd 1 ...
I1225 18:36:46.295524   66325 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1225 18:36:46.295533   66325 out.go:374] Setting ErrFile to fd 2...
I1225 18:36:46.295537   66325 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1225 18:36:46.295720   66325 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22301-5579/.minikube/bin
I1225 18:36:46.296291   66325 config.go:182] Loaded profile config "functional-969923": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-rc.1
I1225 18:36:46.296948   66325 config.go:182] Loaded profile config "functional-969923": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-rc.1
I1225 18:36:46.297417   66325 cli_runner.go:164] Run: docker container inspect functional-969923 --format={{.State.Status}}
I1225 18:36:46.319550   66325 ssh_runner.go:195] Run: systemctl --version
I1225 18:36:46.319608   66325 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-969923
I1225 18:36:46.342657   66325 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/22301-5579/.minikube/machines/functional-969923/id_rsa Username:docker}
I1225 18:36:46.440930   66325 build_images.go:162] Building image from path: /tmp/build.2907306632.tar
I1225 18:36:46.441002   66325 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I1225 18:36:46.450008   66325 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.2907306632.tar
I1225 18:36:46.453845   66325 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.2907306632.tar: stat -c "%s %y" /var/lib/minikube/build/build.2907306632.tar: Process exited with status 1
stdout:

stderr:
stat: cannot statx '/var/lib/minikube/build/build.2907306632.tar': No such file or directory
I1225 18:36:46.453874   66325 ssh_runner.go:362] scp /tmp/build.2907306632.tar --> /var/lib/minikube/build/build.2907306632.tar (3072 bytes)
I1225 18:36:46.471848   66325 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.2907306632
I1225 18:36:46.479945   66325 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.2907306632 -xf /var/lib/minikube/build/build.2907306632.tar
I1225 18:36:46.488601   66325 crio.go:315] Building image: /var/lib/minikube/build/build.2907306632
I1225 18:36:46.488661   66325 ssh_runner.go:195] Run: sudo podman build -t localhost/my-image:functional-969923 /var/lib/minikube/build/build.2907306632 --cgroup-manager=cgroupfs
Trying to pull gcr.io/k8s-minikube/busybox:latest...
Getting image source signatures
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying config sha256:beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a
Writing manifest to image destination
Storing signatures
I1225 18:36:49.076610   66325 ssh_runner.go:235] Completed: sudo podman build -t localhost/my-image:functional-969923 /var/lib/minikube/build/build.2907306632 --cgroup-manager=cgroupfs: (2.587922959s)
I1225 18:36:49.076677   66325 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.2907306632
I1225 18:36:49.085107   66325 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.2907306632.tar
I1225 18:36:49.092930   66325 build_images.go:218] Built localhost/my-image:functional-969923 from /tmp/build.2907306632.tar
I1225 18:36:49.092961   66325 build_images.go:134] succeeded building to: functional-969923
I1225 18:36:49.092966   66325 build_images.go:135] failed building to: 
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-969923 image ls
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageBuild (3.45s)

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/Setup (0.17s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/Setup
functional_test.go:357: (dbg) Run:  docker pull ghcr.io/medyagh/image-mirrors/kicbase/echo-server:1.0
functional_test.go:362: (dbg) Run:  docker tag ghcr.io/medyagh/image-mirrors/kicbase/echo-server:1.0 ghcr.io/medyagh/image-mirrors/kicbase/echo-server:functional-969923
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/Setup (0.17s)

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageLoadDaemon (1.38s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:370: (dbg) Run:  out/minikube-linux-amd64 -p functional-969923 image load --daemon ghcr.io/medyagh/image-mirrors/kicbase/echo-server:functional-969923 --alsologtostderr
functional_test.go:370: (dbg) Done: out/minikube-linux-amd64 -p functional-969923 image load --daemon ghcr.io/medyagh/image-mirrors/kicbase/echo-server:functional-969923 --alsologtostderr: (1.124283961s)
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-969923 image ls
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageLoadDaemon (1.38s)

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/Version/short (0.06s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/Version/short
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/Version/short

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/Version/short
functional_test.go:2266: (dbg) Run:  out/minikube-linux-amd64 -p functional-969923 version --short
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/Version/short (0.06s)

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/Version/components (0.5s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/Version/components
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/Version/components

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/Version/components
functional_test.go:2280: (dbg) Run:  out/minikube-linux-amd64 -p functional-969923 version -o=json --components
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/Version/components (0.50s)

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/UpdateContextCmd/no_changes (0.18s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/UpdateContextCmd/no_changes

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/UpdateContextCmd/no_changes
functional_test.go:2129: (dbg) Run:  out/minikube-linux-amd64 -p functional-969923 update-context --alsologtostderr -v=2
2025/12/25 18:36:45 [DEBUG] GET http://127.0.0.1:36911/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/UpdateContextCmd/no_changes (0.18s)

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/UpdateContextCmd/no_minikube_cluster (0.18s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/UpdateContextCmd/no_minikube_cluster

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2129: (dbg) Run:  out/minikube-linux-amd64 -p functional-969923 update-context --alsologtostderr -v=2
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/UpdateContextCmd/no_minikube_cluster (0.18s)

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/UpdateContextCmd/no_clusters (0.18s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/UpdateContextCmd/no_clusters

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/UpdateContextCmd/no_clusters
functional_test.go:2129: (dbg) Run:  out/minikube-linux-amd64 -p functional-969923 update-context --alsologtostderr -v=2
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/UpdateContextCmd/no_clusters (0.18s)

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmd/DeployApp (8.17s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmd/DeployApp
functional_test.go:1456: (dbg) Run:  kubectl --context functional-969923 create deployment hello-node --image ghcr.io/medyagh/image-mirrors/kicbase/echo-server
functional_test.go:1460: (dbg) Run:  kubectl --context functional-969923 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1465: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:353: "hello-node-684ffdf98c-rjvd8" [52dc1082-4d68-43a8-8d55-24dd3d9f97f5] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
helpers_test.go:353: "hello-node-684ffdf98c-rjvd8" [52dc1082-4d68-43a8-8d55-24dd3d9f97f5] Running
functional_test.go:1465: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 8.005854943s
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmd/DeployApp (8.17s)

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageReloadDaemon (0.92s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:380: (dbg) Run:  out/minikube-linux-amd64 -p functional-969923 image load --daemon ghcr.io/medyagh/image-mirrors/kicbase/echo-server:functional-969923 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-969923 image ls
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageReloadDaemon (0.92s)

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/RunSecondTunnel (0.43s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-amd64 -p functional-969923 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-amd64 -p functional-969923 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-amd64 -p functional-969923 tunnel --alsologtostderr] ...
helpers_test.go:526: unable to kill pid 60389: os: process already finished
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-amd64 -p functional-969923 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/RunSecondTunnel (0.43s)

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageTagAndLoadDaemon (0.99s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:250: (dbg) Run:  docker pull ghcr.io/medyagh/image-mirrors/kicbase/echo-server:latest
functional_test.go:255: (dbg) Run:  docker tag ghcr.io/medyagh/image-mirrors/kicbase/echo-server:latest ghcr.io/medyagh/image-mirrors/kicbase/echo-server:functional-969923
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-969923 image load --daemon ghcr.io/medyagh/image-mirrors/kicbase/echo-server:functional-969923 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-969923 image ls
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageTagAndLoadDaemon (0.99s)

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/StartTunnel (0s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-linux-amd64 -p functional-969923 tunnel --alsologtostderr]
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/StartTunnel (0.00s)

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/WaitService/Setup (7.2s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-969923 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:353: "nginx-svc" [90f18b6e-c44d-4f34-bc20-e6e6cdfa9b49] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:353: "nginx-svc" [90f18b6e-c44d-4f34-bc20-e6e6cdfa9b49] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 7.00385091s
I1225 18:36:29.461018    9112 kapi.go:150] Service nginx-svc in namespace default found.
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/WaitService/Setup (7.20s)

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageSaveToFile (0.35s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageSaveToFile
functional_test.go:395: (dbg) Run:  out/minikube-linux-amd64 -p functional-969923 image save ghcr.io/medyagh/image-mirrors/kicbase/echo-server:functional-969923 /home/jenkins/workspace/Docker_Linux_crio_integration/echo-server-save.tar --alsologtostderr
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageSaveToFile (0.35s)

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageRemove (0.72s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageRemove
functional_test.go:407: (dbg) Run:  out/minikube-linux-amd64 -p functional-969923 image rm ghcr.io/medyagh/image-mirrors/kicbase/echo-server:functional-969923 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-969923 image ls
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageRemove (0.72s)

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageLoadFromFile (0.6s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:424: (dbg) Run:  out/minikube-linux-amd64 -p functional-969923 image load /home/jenkins/workspace/Docker_Linux_crio_integration/echo-server-save.tar --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-969923 image ls
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageLoadFromFile (0.60s)

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageSaveDaemon (0.37s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:434: (dbg) Run:  docker rmi ghcr.io/medyagh/image-mirrors/kicbase/echo-server:functional-969923
functional_test.go:439: (dbg) Run:  out/minikube-linux-amd64 -p functional-969923 image save --daemon ghcr.io/medyagh/image-mirrors/kicbase/echo-server:functional-969923 --alsologtostderr
functional_test.go:447: (dbg) Run:  docker image inspect ghcr.io/medyagh/image-mirrors/kicbase/echo-server:functional-969923
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageSaveDaemon (0.37s)

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmd/List (0.51s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmd/List
functional_test.go:1474: (dbg) Run:  out/minikube-linux-amd64 -p functional-969923 service list
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmd/List (0.51s)

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmd/JSONOutput (0.5s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmd/JSONOutput
functional_test.go:1504: (dbg) Run:  out/minikube-linux-amd64 -p functional-969923 service list -o json
functional_test.go:1509: Took "503.131407ms" to run "out/minikube-linux-amd64 -p functional-969923 service list -o json"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmd/JSONOutput (0.50s)

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmd/HTTPS (0.37s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmd/HTTPS
functional_test.go:1524: (dbg) Run:  out/minikube-linux-amd64 -p functional-969923 service --namespace=default --https --url hello-node
functional_test.go:1537: found endpoint: https://192.168.49.2:31692
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmd/HTTPS (0.37s)

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/WaitService/IngressIP (0.06s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-969923 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/WaitService/IngressIP (0.06s)

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/AccessDirect (0s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.99.15.120 is working!
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/AccessDirect (0.00s)
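Note: the "tunnel at http://10.99.15.120 is working!" line records that the test could reach the nginx-svc LoadBalancer ingress IP (assigned in the WaitService/IngressIP step above) through the running minikube tunnel. As an illustration only, and not the suite's actual code in functional_test_tunnel_test.go, that kind of reachability probe can be sketched in Go as below; the URL is copied from the log line above and will differ on every run.

// checktunnel.go - illustrative sketch of an HTTP probe against the tunneled
// LoadBalancer IP reported in the AccessDirect step above.
package main

import (
	"fmt"
	"net/http"
	"os"
	"time"
)

func main() {
	url := "http://10.99.15.120" // ingress IP from this run; changes per run
	client := &http.Client{Timeout: 5 * time.Second}
	resp, err := client.Get(url)
	if err != nil {
		fmt.Fprintf(os.Stderr, "tunnel not reachable: %v\n", err)
		os.Exit(1)
	}
	defer resp.Body.Close()
	if resp.StatusCode != http.StatusOK {
		fmt.Fprintf(os.Stderr, "unexpected status: %s\n", resp.Status)
		os.Exit(1)
	}
	fmt.Printf("tunnel at %s is working (HTTP %d)\n", url, resp.StatusCode)
}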

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/DeleteTunnel (0.12s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-linux-amd64 -p functional-969923 tunnel --alsologtostderr] ...
functional_test_tunnel_test.go:437: failed to stop process: signal: terminated
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/DeleteTunnel (0.12s)

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmd/Format (0.4s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmd/Format
functional_test.go:1555: (dbg) Run:  out/minikube-linux-amd64 -p functional-969923 service hello-node --url --format={{.IP}}
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmd/Format (0.40s)

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmd/URL (0.55s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmd/URL
functional_test.go:1574: (dbg) Run:  out/minikube-linux-amd64 -p functional-969923 service hello-node --url
functional_test.go:1580: found endpoint for hello-node: http://192.168.49.2:31692
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmd/URL (0.55s)

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ProfileCmd/profile_not_create (0.55s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ProfileCmd/profile_not_create
functional_test.go:1290: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1295: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ProfileCmd/profile_not_create (0.55s)

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ProfileCmd/profile_list (0.55s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ProfileCmd/profile_list
functional_test.go:1330: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1335: Took "452.840642ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1344: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1349: Took "101.967766ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ProfileCmd/profile_list (0.55s)

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/MountCmd/any-port (13.14s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/MountCmd/any-port
functional_test_mount_test.go:74: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-969923 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-rc.1parallelMoun768989110/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:108: wrote "test-1766687792838900991" to /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-rc.1parallelMoun768989110/001/created-by-test
functional_test_mount_test.go:108: wrote "test-1766687792838900991" to /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-rc.1parallelMoun768989110/001/created-by-test-removed-by-pod
functional_test_mount_test.go:108: wrote "test-1766687792838900991" to /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-rc.1parallelMoun768989110/001/test-1766687792838900991
functional_test_mount_test.go:116: (dbg) Run:  out/minikube-linux-amd64 -p functional-969923 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:116: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-969923 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (370.767034ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
I1225 18:36:33.210330    9112 retry.go:84] will retry after 400ms: exit status 1 (duplicate log for 2m37.9s)
functional_test_mount_test.go:116: (dbg) Run:  out/minikube-linux-amd64 -p functional-969923 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:130: (dbg) Run:  out/minikube-linux-amd64 -p functional-969923 ssh -- ls -la /mount-9p
functional_test_mount_test.go:134: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Dec 25 18:36 created-by-test
-rw-r--r-- 1 docker docker 24 Dec 25 18:36 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Dec 25 18:36 test-1766687792838900991
functional_test_mount_test.go:138: (dbg) Run:  out/minikube-linux-amd64 -p functional-969923 ssh cat /mount-9p/test-1766687792838900991
functional_test_mount_test.go:149: (dbg) Run:  kubectl --context functional-969923 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:154: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:353: "busybox-mount" [c009c0a5-f11c-4118-976e-536b12012d85] Pending
helpers_test.go:353: "busybox-mount" [c009c0a5-f11c-4118-976e-536b12012d85] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:353: "busybox-mount" [c009c0a5-f11c-4118-976e-536b12012d85] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:353: "busybox-mount" [c009c0a5-f11c-4118-976e-536b12012d85] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:154: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 10.003907093s
functional_test_mount_test.go:170: (dbg) Run:  kubectl --context functional-969923 logs busybox-mount
functional_test_mount_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p functional-969923 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p functional-969923 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:91: (dbg) Run:  out/minikube-linux-amd64 -p functional-969923 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:95: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-969923 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-rc.1parallelMoun768989110/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/MountCmd/any-port (13.14s)
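
The any-port flow above can be replayed by hand. A rough sketch, assuming the functional-969923 profile from this run is still up and /tmp/mnt is a hypothetical host directory:

  # run the 9p mount daemon in the background (the test drives it as a daemon)
  out/minikube-linux-amd64 mount -p functional-969923 /tmp/mnt:/mount-9p --alsologtostderr -v=1 &
  # the first probe can fail with exit 1 while the mount is still coming up,
  # which is why the harness retries after 400ms above
  out/minikube-linux-amd64 -p functional-969923 ssh "findmnt -T /mount-9p | grep 9p"
  out/minikube-linux-amd64 -p functional-969923 ssh -- ls -la /mount-9p

The test then runs the busybox-mount pod from testdata/busybox-mount-test.yaml against the mount and stats /mount-9p/created-by-pod to confirm writes are visible in both directions.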

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ProfileCmd/profile_json_output (0.54s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ProfileCmd/profile_json_output
functional_test.go:1381: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1386: Took "453.965933ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1394: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1399: Took "83.267983ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ProfileCmd/profile_json_output (0.54s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/MountCmd/specific-port (1.86s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/MountCmd/specific-port
functional_test_mount_test.go:219: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-969923 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-rc.1parallelMoun722191136/001:/mount-9p --alsologtostderr -v=1 --port 34281]
functional_test_mount_test.go:249: (dbg) Run:  out/minikube-linux-amd64 -p functional-969923 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:249: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-969923 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (343.050626ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:249: (dbg) Run:  out/minikube-linux-amd64 -p functional-969923 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:263: (dbg) Run:  out/minikube-linux-amd64 -p functional-969923 ssh -- ls -la /mount-9p
functional_test_mount_test.go:267: guest mount directory contents
total 0
functional_test_mount_test.go:269: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-969923 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-rc.1parallelMoun722191136/001:/mount-9p --alsologtostderr -v=1 --port 34281] ...
functional_test_mount_test.go:270: reading mount text
functional_test_mount_test.go:284: done reading mount text
functional_test_mount_test.go:236: (dbg) Run:  out/minikube-linux-amd64 -p functional-969923 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:236: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-969923 ssh "sudo umount -f /mount-9p": exit status 1 (275.762254ms)

                                                
                                                
-- stdout --
	umount: /mount-9p: not mounted.

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

                                                
                                                
** /stderr **
functional_test_mount_test.go:238: "out/minikube-linux-amd64 -p functional-969923 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:240: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-969923 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-rc.1parallelMoun722191136/001:/mount-9p --alsologtostderr -v=1 --port 34281] ...
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/MountCmd/specific-port (1.86s)
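
The specific-port variant pins the 9p server to --port 34281. Note that cleanup tolerates a failed forced unmount: once the mount daemon is stopped, the remote umount reports "not mounted" and exits 32, which minikube ssh surfaces as exit status 1 above. Same assumptions as the previous sketch:

  out/minikube-linux-amd64 mount -p functional-969923 /tmp/mnt:/mount-9p --alsologtostderr -v=1 --port 34281 &
  out/minikube-linux-amd64 -p functional-969923 ssh "findmnt -T /mount-9p | grep 9p"
  out/minikube-linux-amd64 -p functional-969923 ssh "sudo umount -f /mount-9p"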

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/MountCmd/VerifyCleanup (1.64s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:304: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-969923 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-rc.1parallelMoun3212395188/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:304: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-969923 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-rc.1parallelMoun3212395188/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:304: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-969923 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-rc.1parallelMoun3212395188/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:331: (dbg) Run:  out/minikube-linux-amd64 -p functional-969923 ssh "findmnt -T" /mount1
functional_test_mount_test.go:331: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-969923 ssh "findmnt -T" /mount1: exit status 1 (370.931686ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:331: (dbg) Run:  out/minikube-linux-amd64 -p functional-969923 ssh "findmnt -T" /mount1
functional_test_mount_test.go:331: (dbg) Run:  out/minikube-linux-amd64 -p functional-969923 ssh "findmnt -T" /mount2
functional_test_mount_test.go:331: (dbg) Run:  out/minikube-linux-amd64 -p functional-969923 ssh "findmnt -T" /mount3
functional_test_mount_test.go:376: (dbg) Run:  out/minikube-linux-amd64 mount -p functional-969923 --kill=true
functional_test_mount_test.go:319: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-969923 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-rc.1parallelMoun3212395188/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:319: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-969923 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-rc.1parallelMoun3212395188/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:319: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-969923 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-rc.1parallelMoun3212395188/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/MountCmd/VerifyCleanup (1.64s)
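
VerifyCleanup checks that a single --kill invocation tears down every outstanding mount daemon for the profile; the helpers afterwards only confirm the parent processes are gone. Sketch, same assumptions as above:

  # three concurrent mounts of one host directory
  out/minikube-linux-amd64 mount -p functional-969923 /tmp/mnt:/mount1 --alsologtostderr -v=1 &
  out/minikube-linux-amd64 mount -p functional-969923 /tmp/mnt:/mount2 --alsologtostderr -v=1 &
  out/minikube-linux-amd64 mount -p functional-969923 /tmp/mnt:/mount3 --alsologtostderr -v=1 &
  # kill all mount processes for the profile in one call
  out/minikube-linux-amd64 mount -p functional-969923 --kill=true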

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/delete_echo-server_images (0.04s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/delete_echo-server_images
functional_test.go:205: (dbg) Run:  docker rmi -f ghcr.io/medyagh/image-mirrors/kicbase/echo-server:1.0
functional_test.go:205: (dbg) Run:  docker rmi -f ghcr.io/medyagh/image-mirrors/kicbase/echo-server:functional-969923
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/delete_echo-server_images (0.04s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/delete_my-image_image (0.02s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/delete_my-image_image
functional_test.go:213: (dbg) Run:  docker rmi -f localhost/my-image:functional-969923
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/delete_my-image_image (0.02s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/delete_minikube_cached_images (0.02s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/delete_minikube_cached_images
functional_test.go:221: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-969923
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/delete_minikube_cached_images (0.02s)

                                                
                                    
TestMultiControlPlane/serial/StartCluster (113.31s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-amd64 -p ha-535959 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio
E1225 18:37:24.216429    9112 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22301-5579/.minikube/profiles/addons-335994/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:101: (dbg) Done: out/minikube-linux-amd64 -p ha-535959 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio: (1m52.576053263s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-amd64 -p ha-535959 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/StartCluster (113.31s)
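
StartCluster brings up a three-control-plane cluster via the --ha flag (the status output later in this run shows ha-535959, -m02 and -m03 all as Control Plane). The invocation, essentially verbatim from above:

  out/minikube-linux-amd64 -p ha-535959 start --ha --memory 3072 --wait true \
    --alsologtostderr -v 5 --driver=docker --container-runtime=crio
  out/minikube-linux-amd64 -p ha-535959 status --alsologtostderr -v 5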

                                                
                                    
TestMultiControlPlane/serial/DeployApp (4.97s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-amd64 -p ha-535959 kubectl -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-amd64 -p ha-535959 kubectl -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-amd64 -p ha-535959 kubectl -- rollout status deployment/busybox: (3.058821851s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 -p ha-535959 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-amd64 -p ha-535959 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 -p ha-535959 kubectl -- exec busybox-7b57f96db7-924r9 -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 -p ha-535959 kubectl -- exec busybox-7b57f96db7-fd4kp -- nslookup kubernetes.io
E1225 18:38:52.419824    9112 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22301-5579/.minikube/profiles/functional-984202/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1225 18:38:52.425166    9112 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22301-5579/.minikube/profiles/functional-984202/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1225 18:38:52.435439    9112 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22301-5579/.minikube/profiles/functional-984202/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1225 18:38:52.455746    9112 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22301-5579/.minikube/profiles/functional-984202/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1225 18:38:52.496051    9112 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22301-5579/.minikube/profiles/functional-984202/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 -p ha-535959 kubectl -- exec busybox-7b57f96db7-nrhzt -- nslookup kubernetes.io
E1225 18:38:52.576524    9112 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22301-5579/.minikube/profiles/functional-984202/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p ha-535959 kubectl -- exec busybox-7b57f96db7-924r9 -- nslookup kubernetes.default
E1225 18:38:52.736914    9112 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22301-5579/.minikube/profiles/functional-984202/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p ha-535959 kubectl -- exec busybox-7b57f96db7-fd4kp -- nslookup kubernetes.default
E1225 18:38:53.057316    9112 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22301-5579/.minikube/profiles/functional-984202/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p ha-535959 kubectl -- exec busybox-7b57f96db7-nrhzt -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 -p ha-535959 kubectl -- exec busybox-7b57f96db7-924r9 -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 -p ha-535959 kubectl -- exec busybox-7b57f96db7-fd4kp -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 -p ha-535959 kubectl -- exec busybox-7b57f96db7-nrhzt -- nslookup kubernetes.default.svc.cluster.local
E1225 18:38:53.697664    9112 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22301-5579/.minikube/profiles/functional-984202/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
--- PASS: TestMultiControlPlane/serial/DeployApp (4.97s)
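
DeployApp rolls out the busybox test deployment and checks in-cluster DNS from each replica. Roughly, with kubectl pointed at the ha-535959 context and <busybox-pod> standing in for any of the busybox-7b57f96db7-* replicas above:

  kubectl --context ha-535959 apply -f ./testdata/ha/ha-pod-dns-test.yaml
  kubectl --context ha-535959 rollout status deployment/busybox
  # every replica must resolve an external name, the API service, and its FQDN
  kubectl --context ha-535959 exec <busybox-pod> -- nslookup kubernetes.io
  kubectl --context ha-535959 exec <busybox-pod> -- nslookup kubernetes.default
  kubectl --context ha-535959 exec <busybox-pod> -- nslookup kubernetes.default.svc.cluster.local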

                                                
                                    
TestMultiControlPlane/serial/PingHostFromPods (1.07s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-amd64 -p ha-535959 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 -p ha-535959 kubectl -- exec busybox-7b57f96db7-924r9 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 -p ha-535959 kubectl -- exec busybox-7b57f96db7-924r9 -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 -p ha-535959 kubectl -- exec busybox-7b57f96db7-fd4kp -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 -p ha-535959 kubectl -- exec busybox-7b57f96db7-fd4kp -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 -p ha-535959 kubectl -- exec busybox-7b57f96db7-nrhzt -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 -p ha-535959 kubectl -- exec busybox-7b57f96db7-nrhzt -- sh -c "ping -c 1 192.168.49.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.07s)
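
PingHostFromPods verifies pod-to-host reachability: each busybox replica resolves host.minikube.internal and pings the host address on the cluster network (192.168.49.1 in this run). With the same <busybox-pod> placeholder as above:

  kubectl --context ha-535959 exec <busybox-pod> -- sh -c \
    "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
  kubectl --context ha-535959 exec <busybox-pod> -- sh -c "ping -c 1 192.168.49.1"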

                                                
                                    
TestMultiControlPlane/serial/AddWorkerNode (25.75s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-amd64 -p ha-535959 node add --alsologtostderr -v 5
E1225 18:38:54.978039    9112 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22301-5579/.minikube/profiles/functional-984202/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1225 18:38:57.539728    9112 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22301-5579/.minikube/profiles/functional-984202/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1225 18:39:02.660556    9112 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22301-5579/.minikube/profiles/functional-984202/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1225 18:39:12.901059    9112 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22301-5579/.minikube/profiles/functional-984202/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:228: (dbg) Done: out/minikube-linux-amd64 -p ha-535959 node add --alsologtostderr -v 5: (24.858782568s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-amd64 -p ha-535959 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (25.75s)
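
AddWorkerNode grows the cluster with a plain node add, which joins a worker (ha-535959-m04 here); the AddSecondaryNode test further down passes --control-plane to join another control plane instead:

  out/minikube-linux-amd64 -p ha-535959 node add --alsologtostderr -v 5
  out/minikube-linux-amd64 -p ha-535959 node add --control-plane --alsologtostderr -v 5   # used later in this run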

                                                
                                    
TestMultiControlPlane/serial/NodeLabels (0.06s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-535959 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.06s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterClusterStart (0.89s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.89s)

                                                
                                    
TestMultiControlPlane/serial/CopyFile (16.96s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:328: (dbg) Run:  out/minikube-linux-amd64 -p ha-535959 status --output json --alsologtostderr -v 5
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-535959 cp testdata/cp-test.txt ha-535959:/home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-535959 ssh -n ha-535959 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-535959 cp ha-535959:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3337616598/001/cp-test_ha-535959.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-535959 ssh -n ha-535959 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-535959 cp ha-535959:/home/docker/cp-test.txt ha-535959-m02:/home/docker/cp-test_ha-535959_ha-535959-m02.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-535959 ssh -n ha-535959 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-535959 ssh -n ha-535959-m02 "sudo cat /home/docker/cp-test_ha-535959_ha-535959-m02.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-535959 cp ha-535959:/home/docker/cp-test.txt ha-535959-m03:/home/docker/cp-test_ha-535959_ha-535959-m03.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-535959 ssh -n ha-535959 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-535959 ssh -n ha-535959-m03 "sudo cat /home/docker/cp-test_ha-535959_ha-535959-m03.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-535959 cp ha-535959:/home/docker/cp-test.txt ha-535959-m04:/home/docker/cp-test_ha-535959_ha-535959-m04.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-535959 ssh -n ha-535959 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-535959 ssh -n ha-535959-m04 "sudo cat /home/docker/cp-test_ha-535959_ha-535959-m04.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-535959 cp testdata/cp-test.txt ha-535959-m02:/home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-535959 ssh -n ha-535959-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-535959 cp ha-535959-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3337616598/001/cp-test_ha-535959-m02.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-535959 ssh -n ha-535959-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-535959 cp ha-535959-m02:/home/docker/cp-test.txt ha-535959:/home/docker/cp-test_ha-535959-m02_ha-535959.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-535959 ssh -n ha-535959-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-535959 ssh -n ha-535959 "sudo cat /home/docker/cp-test_ha-535959-m02_ha-535959.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-535959 cp ha-535959-m02:/home/docker/cp-test.txt ha-535959-m03:/home/docker/cp-test_ha-535959-m02_ha-535959-m03.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-535959 ssh -n ha-535959-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-535959 ssh -n ha-535959-m03 "sudo cat /home/docker/cp-test_ha-535959-m02_ha-535959-m03.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-535959 cp ha-535959-m02:/home/docker/cp-test.txt ha-535959-m04:/home/docker/cp-test_ha-535959-m02_ha-535959-m04.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-535959 ssh -n ha-535959-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-535959 ssh -n ha-535959-m04 "sudo cat /home/docker/cp-test_ha-535959-m02_ha-535959-m04.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-535959 cp testdata/cp-test.txt ha-535959-m03:/home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-535959 ssh -n ha-535959-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-535959 cp ha-535959-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3337616598/001/cp-test_ha-535959-m03.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-535959 ssh -n ha-535959-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-535959 cp ha-535959-m03:/home/docker/cp-test.txt ha-535959:/home/docker/cp-test_ha-535959-m03_ha-535959.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-535959 ssh -n ha-535959-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-535959 ssh -n ha-535959 "sudo cat /home/docker/cp-test_ha-535959-m03_ha-535959.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-535959 cp ha-535959-m03:/home/docker/cp-test.txt ha-535959-m02:/home/docker/cp-test_ha-535959-m03_ha-535959-m02.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-535959 ssh -n ha-535959-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-535959 ssh -n ha-535959-m02 "sudo cat /home/docker/cp-test_ha-535959-m03_ha-535959-m02.txt"
E1225 18:39:33.382057    9112 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22301-5579/.minikube/profiles/functional-984202/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-535959 cp ha-535959-m03:/home/docker/cp-test.txt ha-535959-m04:/home/docker/cp-test_ha-535959-m03_ha-535959-m04.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-535959 ssh -n ha-535959-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-535959 ssh -n ha-535959-m04 "sudo cat /home/docker/cp-test_ha-535959-m03_ha-535959-m04.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-535959 cp testdata/cp-test.txt ha-535959-m04:/home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-535959 ssh -n ha-535959-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-535959 cp ha-535959-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3337616598/001/cp-test_ha-535959-m04.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-535959 ssh -n ha-535959-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-535959 cp ha-535959-m04:/home/docker/cp-test.txt ha-535959:/home/docker/cp-test_ha-535959-m04_ha-535959.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-535959 ssh -n ha-535959-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-535959 ssh -n ha-535959 "sudo cat /home/docker/cp-test_ha-535959-m04_ha-535959.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-535959 cp ha-535959-m04:/home/docker/cp-test.txt ha-535959-m02:/home/docker/cp-test_ha-535959-m04_ha-535959-m02.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-535959 ssh -n ha-535959-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-535959 ssh -n ha-535959-m02 "sudo cat /home/docker/cp-test_ha-535959-m04_ha-535959-m02.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-535959 cp ha-535959-m04:/home/docker/cp-test.txt ha-535959-m03:/home/docker/cp-test_ha-535959-m04_ha-535959-m03.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-535959 ssh -n ha-535959-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-535959 ssh -n ha-535959-m03 "sudo cat /home/docker/cp-test_ha-535959-m04_ha-535959-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (16.96s)
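
CopyFile cycles cp-test.txt through every node using the three source/target shapes minikube cp accepts: host path to node, node to host, and node to node. Condensed from the run above (the /tmp destination here is an arbitrary example path):

  out/minikube-linux-amd64 -p ha-535959 cp testdata/cp-test.txt ha-535959:/home/docker/cp-test.txt
  out/minikube-linux-amd64 -p ha-535959 cp ha-535959:/home/docker/cp-test.txt /tmp/cp-test_ha-535959.txt
  out/minikube-linux-amd64 -p ha-535959 cp ha-535959:/home/docker/cp-test.txt ha-535959-m02:/home/docker/cp-test_ha-535959_ha-535959-m02.txt
  # verify on the target node
  out/minikube-linux-amd64 -p ha-535959 ssh -n ha-535959-m02 "sudo cat /home/docker/cp-test_ha-535959_ha-535959-m02.txt"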

                                                
                                    
TestMultiControlPlane/serial/StopSecondaryNode (13.84s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:365: (dbg) Run:  out/minikube-linux-amd64 -p ha-535959 node stop m02 --alsologtostderr -v 5
E1225 18:39:40.369778    9112 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22301-5579/.minikube/profiles/addons-335994/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:365: (dbg) Done: out/minikube-linux-amd64 -p ha-535959 node stop m02 --alsologtostderr -v 5: (13.138864248s)
ha_test.go:371: (dbg) Run:  out/minikube-linux-amd64 -p ha-535959 status --alsologtostderr -v 5
ha_test.go:371: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-535959 status --alsologtostderr -v 5: exit status 7 (702.919817ms)

                                                
                                                
-- stdout --
	ha-535959
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-535959-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-535959-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-535959-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1225 18:39:51.628702   87636 out.go:360] Setting OutFile to fd 1 ...
	I1225 18:39:51.628800   87636 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1225 18:39:51.628808   87636 out.go:374] Setting ErrFile to fd 2...
	I1225 18:39:51.628811   87636 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1225 18:39:51.629026   87636 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22301-5579/.minikube/bin
	I1225 18:39:51.629199   87636 out.go:368] Setting JSON to false
	I1225 18:39:51.629239   87636 mustload.go:66] Loading cluster: ha-535959
	I1225 18:39:51.629284   87636 notify.go:221] Checking for updates...
	I1225 18:39:51.629702   87636 config.go:182] Loaded profile config "ha-535959": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1225 18:39:51.629720   87636 status.go:174] checking status of ha-535959 ...
	I1225 18:39:51.630200   87636 cli_runner.go:164] Run: docker container inspect ha-535959 --format={{.State.Status}}
	I1225 18:39:51.649476   87636 status.go:371] ha-535959 host status = "Running" (err=<nil>)
	I1225 18:39:51.649506   87636 host.go:66] Checking if "ha-535959" exists ...
	I1225 18:39:51.649793   87636 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-535959
	I1225 18:39:51.667074   87636 host.go:66] Checking if "ha-535959" exists ...
	I1225 18:39:51.667321   87636 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1225 18:39:51.667368   87636 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-535959
	I1225 18:39:51.685644   87636 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/22301-5579/.minikube/machines/ha-535959/id_rsa Username:docker}
	I1225 18:39:51.774539   87636 ssh_runner.go:195] Run: systemctl --version
	I1225 18:39:51.780883   87636 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1225 18:39:51.793410   87636 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1225 18:39:51.854380   87636 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:63 OomKillDisable:false NGoroutines:74 SystemTime:2025-12-25 18:39:51.843689514 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:dea7da592f5d1d2b7755e3a161be07f43fad8f75 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1225 18:39:51.854908   87636 kubeconfig.go:125] found "ha-535959" server: "https://192.168.49.254:8443"
	I1225 18:39:51.854942   87636 api_server.go:166] Checking apiserver status ...
	I1225 18:39:51.854992   87636 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1225 18:39:51.868048   87636 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1250/cgroup
	I1225 18:39:51.876593   87636 ssh_runner.go:195] Run: sudo grep ^0:: /proc/1250/cgroup
	I1225 18:39:51.884494   87636 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/system.slice/crio-e053aefe9c3b954d0cf0197fadd5115ce8f44c7a66724f6561195cd49a1a6313.scope/container/cgroup.freeze
	I1225 18:39:51.892071   87636 api_server.go:299] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I1225 18:39:51.896267   87636 api_server.go:325] https://192.168.49.254:8443/healthz returned 200:
	ok
	I1225 18:39:51.896290   87636 status.go:463] ha-535959 apiserver status = Running (err=<nil>)
	I1225 18:39:51.896302   87636 status.go:176] ha-535959 status: &{Name:ha-535959 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1225 18:39:51.896316   87636 status.go:174] checking status of ha-535959-m02 ...
	I1225 18:39:51.896557   87636 cli_runner.go:164] Run: docker container inspect ha-535959-m02 --format={{.State.Status}}
	I1225 18:39:51.915749   87636 status.go:371] ha-535959-m02 host status = "Stopped" (err=<nil>)
	I1225 18:39:51.915770   87636 status.go:384] host is not running, skipping remaining checks
	I1225 18:39:51.915776   87636 status.go:176] ha-535959-m02 status: &{Name:ha-535959-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1225 18:39:51.915807   87636 status.go:174] checking status of ha-535959-m03 ...
	I1225 18:39:51.916055   87636 cli_runner.go:164] Run: docker container inspect ha-535959-m03 --format={{.State.Status}}
	I1225 18:39:51.934714   87636 status.go:371] ha-535959-m03 host status = "Running" (err=<nil>)
	I1225 18:39:51.934737   87636 host.go:66] Checking if "ha-535959-m03" exists ...
	I1225 18:39:51.935050   87636 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-535959-m03
	I1225 18:39:51.953736   87636 host.go:66] Checking if "ha-535959-m03" exists ...
	I1225 18:39:51.954033   87636 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1225 18:39:51.954068   87636 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-535959-m03
	I1225 18:39:51.972590   87636 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32798 SSHKeyPath:/home/jenkins/minikube-integration/22301-5579/.minikube/machines/ha-535959-m03/id_rsa Username:docker}
	I1225 18:39:52.061804   87636 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1225 18:39:52.075199   87636 kubeconfig.go:125] found "ha-535959" server: "https://192.168.49.254:8443"
	I1225 18:39:52.075226   87636 api_server.go:166] Checking apiserver status ...
	I1225 18:39:52.075263   87636 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1225 18:39:52.086521   87636 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1194/cgroup
	I1225 18:39:52.095184   87636 ssh_runner.go:195] Run: sudo grep ^0:: /proc/1194/cgroup
	I1225 18:39:52.102924   87636 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/system.slice/crio-cd0f1993436548cbd86dfc22d8966da289cbdc514b77454f562a16a5d203ce56.scope/container/cgroup.freeze
	I1225 18:39:52.110535   87636 api_server.go:299] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I1225 18:39:52.114530   87636 api_server.go:325] https://192.168.49.254:8443/healthz returned 200:
	ok
	I1225 18:39:52.114554   87636 status.go:463] ha-535959-m03 apiserver status = Running (err=<nil>)
	I1225 18:39:52.114562   87636 status.go:176] ha-535959-m03 status: &{Name:ha-535959-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1225 18:39:52.114579   87636 status.go:174] checking status of ha-535959-m04 ...
	I1225 18:39:52.114847   87636 cli_runner.go:164] Run: docker container inspect ha-535959-m04 --format={{.State.Status}}
	I1225 18:39:52.133616   87636 status.go:371] ha-535959-m04 host status = "Running" (err=<nil>)
	I1225 18:39:52.133644   87636 host.go:66] Checking if "ha-535959-m04" exists ...
	I1225 18:39:52.133925   87636 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-535959-m04
	I1225 18:39:52.151951   87636 host.go:66] Checking if "ha-535959-m04" exists ...
	I1225 18:39:52.152216   87636 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1225 18:39:52.152250   87636 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-535959-m04
	I1225 18:39:52.170719   87636 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32803 SSHKeyPath:/home/jenkins/minikube-integration/22301-5579/.minikube/machines/ha-535959-m04/id_rsa Username:docker}
	I1225 18:39:52.260084   87636 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1225 18:39:52.272542   87636 status.go:176] ha-535959-m04 status: &{Name:ha-535959-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (13.84s)
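
StopSecondaryNode stops one of the three control planes and expects status to go non-zero (exit 7 in this run) while the surviving apiservers stay healthy behind the shared endpoint https://192.168.49.254:8443 seen in the log. In short:

  out/minikube-linux-amd64 -p ha-535959 node stop m02 --alsologtostderr -v 5
  out/minikube-linux-amd64 -p ha-535959 status --alsologtostderr -v 5   # exited 7 here with m02 reported Stopped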

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.72s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.72s)

                                                
                                    
TestMultiControlPlane/serial/RestartSecondaryNode (8.75s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p ha-535959 node start m02 --alsologtostderr -v 5
ha_test.go:422: (dbg) Done: out/minikube-linux-amd64 -p ha-535959 node start m02 --alsologtostderr -v 5: (7.797115895s)
ha_test.go:430: (dbg) Run:  out/minikube-linux-amd64 -p ha-535959 status --alsologtostderr -v 5
ha_test.go:450: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiControlPlane/serial/RestartSecondaryNode (8.75s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.9s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.90s)

                                                
                                    
TestMultiControlPlane/serial/RestartClusterKeepsNodes (107.48s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:458: (dbg) Run:  out/minikube-linux-amd64 -p ha-535959 node list --alsologtostderr -v 5
ha_test.go:464: (dbg) Run:  out/minikube-linux-amd64 -p ha-535959 stop --alsologtostderr -v 5
E1225 18:40:08.059075    9112 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22301-5579/.minikube/profiles/addons-335994/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1225 18:40:14.343642    9112 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22301-5579/.minikube/profiles/functional-984202/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:464: (dbg) Done: out/minikube-linux-amd64 -p ha-535959 stop --alsologtostderr -v 5: (39.604761081s)
ha_test.go:469: (dbg) Run:  out/minikube-linux-amd64 -p ha-535959 start --wait true --alsologtostderr -v 5
E1225 18:41:20.321946    9112 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22301-5579/.minikube/profiles/functional-969923/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1225 18:41:20.327298    9112 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22301-5579/.minikube/profiles/functional-969923/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1225 18:41:20.337625    9112 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22301-5579/.minikube/profiles/functional-969923/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1225 18:41:20.357935    9112 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22301-5579/.minikube/profiles/functional-969923/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1225 18:41:20.398379    9112 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22301-5579/.minikube/profiles/functional-969923/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1225 18:41:20.478747    9112 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22301-5579/.minikube/profiles/functional-969923/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1225 18:41:20.639188    9112 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22301-5579/.minikube/profiles/functional-969923/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1225 18:41:20.959448    9112 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22301-5579/.minikube/profiles/functional-969923/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1225 18:41:21.600189    9112 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22301-5579/.minikube/profiles/functional-969923/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1225 18:41:22.880496    9112 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22301-5579/.minikube/profiles/functional-969923/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1225 18:41:25.440753    9112 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22301-5579/.minikube/profiles/functional-969923/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1225 18:41:30.561667    9112 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22301-5579/.minikube/profiles/functional-969923/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1225 18:41:36.264608    9112 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22301-5579/.minikube/profiles/functional-984202/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1225 18:41:40.801853    9112 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22301-5579/.minikube/profiles/functional-969923/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:469: (dbg) Done: out/minikube-linux-amd64 -p ha-535959 start --wait true --alsologtostderr -v 5: (1m7.747094351s)
ha_test.go:474: (dbg) Run:  out/minikube-linux-amd64 -p ha-535959 node list --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/RestartClusterKeepsNodes (107.48s)
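
RestartClusterKeepsNodes stops the whole profile and starts it again with --wait true, then checks that the node list is unchanged:

  out/minikube-linux-amd64 -p ha-535959 node list --alsologtostderr -v 5
  out/minikube-linux-amd64 -p ha-535959 stop --alsologtostderr -v 5
  out/minikube-linux-amd64 -p ha-535959 start --wait true --alsologtostderr -v 5
  out/minikube-linux-amd64 -p ha-535959 node list --alsologtostderr -v 5   # should match the list captured before the stop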

                                                
                                    
TestMultiControlPlane/serial/DeleteSecondaryNode (10.64s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:489: (dbg) Run:  out/minikube-linux-amd64 -p ha-535959 node delete m03 --alsologtostderr -v 5
ha_test.go:489: (dbg) Done: out/minikube-linux-amd64 -p ha-535959 node delete m03 --alsologtostderr -v 5: (9.812499498s)
ha_test.go:495: (dbg) Run:  out/minikube-linux-amd64 -p ha-535959 status --alsologtostderr -v 5
ha_test.go:513: (dbg) Run:  kubectl get nodes
ha_test.go:521: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (10.64s)
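
DeleteSecondaryNode removes the m03 control plane and then confirms via kubectl that only Ready nodes remain:

  out/minikube-linux-amd64 -p ha-535959 node delete m03 --alsologtostderr -v 5
  kubectl get nodes   # run against the ha-535959 context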

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.72s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
E1225 18:42:01.282132    9112 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22301-5579/.minikube/profiles/functional-969923/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.72s)

                                                
                                    
TestMultiControlPlane/serial/StopCluster (31.99s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:533: (dbg) Run:  out/minikube-linux-amd64 -p ha-535959 stop --alsologtostderr -v 5
ha_test.go:533: (dbg) Done: out/minikube-linux-amd64 -p ha-535959 stop --alsologtostderr -v 5: (31.872562628s)
ha_test.go:539: (dbg) Run:  out/minikube-linux-amd64 -p ha-535959 status --alsologtostderr -v 5
ha_test.go:539: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-535959 status --alsologtostderr -v 5: exit status 7 (113.909617ms)

                                                
                                                
-- stdout --
	ha-535959
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-535959-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-535959-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1225 18:42:33.416369  101522 out.go:360] Setting OutFile to fd 1 ...
	I1225 18:42:33.416630  101522 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1225 18:42:33.416640  101522 out.go:374] Setting ErrFile to fd 2...
	I1225 18:42:33.416644  101522 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1225 18:42:33.416812  101522 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22301-5579/.minikube/bin
	I1225 18:42:33.416989  101522 out.go:368] Setting JSON to false
	I1225 18:42:33.417022  101522 mustload.go:66] Loading cluster: ha-535959
	I1225 18:42:33.417160  101522 notify.go:221] Checking for updates...
	I1225 18:42:33.417539  101522 config.go:182] Loaded profile config "ha-535959": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1225 18:42:33.417561  101522 status.go:174] checking status of ha-535959 ...
	I1225 18:42:33.418112  101522 cli_runner.go:164] Run: docker container inspect ha-535959 --format={{.State.Status}}
	I1225 18:42:33.437354  101522 status.go:371] ha-535959 host status = "Stopped" (err=<nil>)
	I1225 18:42:33.437376  101522 status.go:384] host is not running, skipping remaining checks
	I1225 18:42:33.437384  101522 status.go:176] ha-535959 status: &{Name:ha-535959 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1225 18:42:33.437449  101522 status.go:174] checking status of ha-535959-m02 ...
	I1225 18:42:33.437690  101522 cli_runner.go:164] Run: docker container inspect ha-535959-m02 --format={{.State.Status}}
	I1225 18:42:33.455020  101522 status.go:371] ha-535959-m02 host status = "Stopped" (err=<nil>)
	I1225 18:42:33.455062  101522 status.go:384] host is not running, skipping remaining checks
	I1225 18:42:33.455070  101522 status.go:176] ha-535959-m02 status: &{Name:ha-535959-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1225 18:42:33.455089  101522 status.go:174] checking status of ha-535959-m04 ...
	I1225 18:42:33.455325  101522 cli_runner.go:164] Run: docker container inspect ha-535959-m04 --format={{.State.Status}}
	I1225 18:42:33.473511  101522 status.go:371] ha-535959-m04 host status = "Stopped" (err=<nil>)
	I1225 18:42:33.473535  101522 status.go:384] host is not running, skipping remaining checks
	I1225 18:42:33.473542  101522 status.go:176] ha-535959-m04 status: &{Name:ha-535959-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopCluster (31.99s)

                                                
                                    
TestMultiControlPlane/serial/RestartCluster (58.5s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:562: (dbg) Run:  out/minikube-linux-amd64 -p ha-535959 start --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio
E1225 18:42:42.243072    9112 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22301-5579/.minikube/profiles/functional-969923/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:562: (dbg) Done: out/minikube-linux-amd64 -p ha-535959 start --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio: (57.680987608s)
ha_test.go:568: (dbg) Run:  out/minikube-linux-amd64 -p ha-535959 status --alsologtostderr -v 5
ha_test.go:586: (dbg) Run:  kubectl get nodes
ha_test.go:594: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (58.50s)

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.7s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.70s)

                                                
                                    
TestMultiControlPlane/serial/AddSecondaryNode (43.62s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:607: (dbg) Run:  out/minikube-linux-amd64 -p ha-535959 node add --control-plane --alsologtostderr -v 5
E1225 18:43:52.419845    9112 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22301-5579/.minikube/profiles/functional-984202/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1225 18:44:04.164081    9112 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22301-5579/.minikube/profiles/functional-969923/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:607: (dbg) Done: out/minikube-linux-amd64 -p ha-535959 node add --control-plane --alsologtostderr -v 5: (42.725597757s)
ha_test.go:613: (dbg) Run:  out/minikube-linux-amd64 -p ha-535959 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (43.62s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.91s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.91s)

                                                
                                    
x
+
TestJSONOutput/start/Command (42.07s)

                                                
                                                
=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-312540 --output=json --user=testUser --memory=3072 --wait=true --driver=docker  --container-runtime=crio
E1225 18:44:40.370106    9112 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22301-5579/.minikube/profiles/addons-335994/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p json-output-312540 --output=json --user=testUser --memory=3072 --wait=true --driver=docker  --container-runtime=crio: (42.070536599s)
--- PASS: TestJSONOutput/start/Command (42.07s)

                                                
                                    
x
+
TestJSONOutput/start/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

                                                
                                    
x
+
TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/pause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

                                                
                                    
x
+
TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/unpause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

                                                
                                    
x
+
TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/stop/Command (8s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p json-output-312540 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p json-output-312540 --output=json --user=testUser: (8.001404331s)
--- PASS: TestJSONOutput/stop/Command (8.00s)

                                                
                                    
x
+
TestJSONOutput/stop/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

                                                
                                    
x
+
TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
x
+
TestErrorJSONOutput (0.24s)

                                                
                                                
=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-error-356788 --memory=3072 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p json-output-error-356788 --memory=3072 --output=json --wait=true --driver=fail: exit status 56 (77.784867ms)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"a2702826-86f2-4933-b483-21b707bd8002","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-356788] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"faa409b5-5865-4de9-949f-4513ddafa65b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=22301"}}
	{"specversion":"1.0","id":"9e70c544-0efb-4248-982e-d5af1b440231","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"ad285cd5-43fb-4bf4-aa63-56700447fbfc","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/22301-5579/kubeconfig"}}
	{"specversion":"1.0","id":"c36f02dc-c64d-414b-a132-b46da093c246","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/22301-5579/.minikube"}}
	{"specversion":"1.0","id":"6555db84-aeac-473d-870e-bdd79ea249be","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"f46b4041-dd8d-4f3c-965c-23c074ef8f15","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"b6392647-54f0-4417-be3b-1449f9088621","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

                                                
                                                
-- /stdout --
helpers_test.go:176: Cleaning up "json-output-error-356788" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p json-output-error-356788
--- PASS: TestErrorJSONOutput (0.24s)
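
Editor's note: the captured stdout above illustrates the --output=json format: each line minikube prints is a CloudEvents-style JSON object with specversion, id, source, type, and a data payload (step, info, or error events). A small Go sketch that decodes such lines and surfaces steps and errors; the struct and event type strings are inferred from the fields visible in this log, and the program expects the JSON output piped to it on stdin.

package main

import (
	"bufio"
	"encoding/json"
	"fmt"
	"os"
)

// event mirrors the JSON lines emitted with --output=json, as seen above.
type event struct {
	SpecVersion string            `json:"specversion"`
	ID          string            `json:"id"`
	Source      string            `json:"source"`
	Type        string            `json:"type"`
	Data        map[string]string `json:"data"`
}

func main() {
	sc := bufio.NewScanner(os.Stdin) // e.g. pipe `minikube start --output=json` into this program
	for sc.Scan() {
		var ev event
		if err := json.Unmarshal(sc.Bytes(), &ev); err != nil {
			continue // skip any non-JSON lines
		}
		switch ev.Type {
		case "io.k8s.sigs.minikube.step":
			fmt.Printf("step %s/%s: %s\n", ev.Data["currentstep"], ev.Data["totalsteps"], ev.Data["message"])
		case "io.k8s.sigs.minikube.error":
			fmt.Printf("error %s (exit code %s): %s\n", ev.Data["name"], ev.Data["exitcode"], ev.Data["message"])
		}
	}
}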

                                                
                                    
x
+
TestKicCustomNetwork/create_custom_network (28.42s)

                                                
                                                
=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-amd64 start -p docker-network-625685 --network=
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-amd64 start -p docker-network-625685 --network=: (26.246853685s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:176: Cleaning up "docker-network-625685" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p docker-network-625685
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p docker-network-625685: (2.148153504s)
--- PASS: TestKicCustomNetwork/create_custom_network (28.42s)

                                                
                                    
x
+
TestKicCustomNetwork/use_default_bridge_network (24.61s)

                                                
                                                
=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-amd64 start -p docker-network-284088 --network=bridge
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-amd64 start -p docker-network-284088 --network=bridge: (22.534326832s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:176: Cleaning up "docker-network-284088" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p docker-network-284088
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p docker-network-284088: (2.054939251s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (24.61s)

                                                
                                    
x
+
TestKicExistingNetwork (21.68s)

                                                
                                                
=== RUN   TestKicExistingNetwork
I1225 18:46:16.548750    9112 cli_runner.go:164] Run: docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
W1225 18:46:16.568100    9112 cli_runner.go:211] docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
I1225 18:46:16.568183    9112 network_create.go:284] running [docker network inspect existing-network] to gather additional debugging logs...
I1225 18:46:16.568210    9112 cli_runner.go:164] Run: docker network inspect existing-network
W1225 18:46:16.586883    9112 cli_runner.go:211] docker network inspect existing-network returned with exit code 1
I1225 18:46:16.586941    9112 network_create.go:287] error running [docker network inspect existing-network]: docker network inspect existing-network: exit status 1
stdout:
[]

                                                
                                                
stderr:
Error response from daemon: network existing-network not found
I1225 18:46:16.586962    9112 network_create.go:289] output of [docker network inspect existing-network]: -- stdout --
[]

                                                
                                                
-- /stdout --
** stderr ** 
Error response from daemon: network existing-network not found

                                                
                                                
** /stderr **
I1225 18:46:16.587095    9112 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I1225 18:46:16.605004    9112 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-ced36c84bfdd IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:4a:63:07:5b:3f:80} reservation:<nil>}
I1225 18:46:16.605434    9112 network.go:206] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001d20240}
I1225 18:46:16.605467    9112 network_create.go:124] attempt to create docker network existing-network 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
I1225 18:46:16.605522    9112 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=existing-network existing-network
I1225 18:46:16.653288    9112 network_create.go:108] docker network existing-network 192.168.58.0/24 created
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-linux-amd64 start -p existing-network-312240 --network=existing-network
E1225 18:46:20.323162    9112 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22301-5579/.minikube/profiles/functional-969923/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
kic_custom_network_test.go:93: (dbg) Done: out/minikube-linux-amd64 start -p existing-network-312240 --network=existing-network: (19.559297609s)
helpers_test.go:176: Cleaning up "existing-network-312240" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p existing-network-312240
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p existing-network-312240: (1.986564199s)
I1225 18:46:38.216508    9112 cli_runner.go:164] Run: docker network ls --filter=label=existing-network --format {{.Name}}
--- PASS: TestKicExistingNetwork (21.68s)
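
Editor's note: TestKicExistingNetwork first creates the Docker network itself (network_create above picks the free subnet 192.168.58.0/24) and then starts a profile with --network=existing-network so minikube attaches to that network instead of creating its own. A rough Go sketch of the same sequence via os/exec; the profile name existing-network-demo is a placeholder, and the docker flags are a trimmed version of the command shown in the log.

package main

import (
	"fmt"
	"os/exec"
)

// run executes a command and echoes its combined output, just to keep the sketch short.
func run(name string, args ...string) error {
	out, err := exec.Command(name, args...).CombinedOutput()
	fmt.Printf("$ %s %v\n%s", name, args, out)
	return err
}

func main() {
	// Pre-create the bridge network (trimmed version of the docker command in the log).
	if err := run("docker", "network", "create", "--driver=bridge",
		"--subnet=192.168.58.0/24", "--gateway=192.168.58.1", "existing-network"); err != nil {
		fmt.Println("network create failed:", err)
		return
	}
	// Attach a new profile to that pre-existing network.
	if err := run("minikube", "start", "-p", "existing-network-demo", "--network=existing-network"); err != nil {
		fmt.Println("minikube start failed:", err)
	}
}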

                                                
                                    
x
+
TestKicCustomSubnet (26.71s)

                                                
                                                
=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-subnet-615417 --subnet=192.168.60.0/24
E1225 18:46:48.004727    9112 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22301-5579/.minikube/profiles/functional-969923/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
kic_custom_network_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-subnet-615417 --subnet=192.168.60.0/24: (24.536590665s)
kic_custom_network_test.go:161: (dbg) Run:  docker network inspect custom-subnet-615417 --format "{{(index .IPAM.Config 0).Subnet}}"
helpers_test.go:176: Cleaning up "custom-subnet-615417" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p custom-subnet-615417
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p custom-subnet-615417: (2.152234595s)
--- PASS: TestKicCustomSubnet (26.71s)
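
Editor's note: the custom-subnet check above starts a profile with --subnet and then reads the subnet back from Docker with a network inspect format string. A compact Go sketch of that round trip; the profile name custom-subnet-demo is a placeholder, and the subnet value is the one used in the log.

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	const name, wantSubnet = "custom-subnet-demo", "192.168.60.0/24" // placeholder profile; subnet from the log
	if err := exec.Command("minikube", "start", "-p", name, "--subnet="+wantSubnet).Run(); err != nil {
		fmt.Println("start failed:", err)
		return
	}
	// Read back the subnet Docker actually assigned, as the test does.
	out, err := exec.Command("docker", "network", "inspect", name,
		"--format", "{{(index .IPAM.Config 0).Subnet}}").Output()
	if err != nil {
		fmt.Println("inspect failed:", err)
		return
	}
	got := strings.TrimSpace(string(out))
	fmt.Printf("want %s, got %s, match=%v\n", wantSubnet, got, got == wantSubnet)
}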

                                                
                                    
x
+
TestKicStaticIP (21.86s)

                                                
                                                
=== RUN   TestKicStaticIP
kic_custom_network_test.go:132: (dbg) Run:  out/minikube-linux-amd64 start -p static-ip-120861 --static-ip=192.168.200.200
kic_custom_network_test.go:132: (dbg) Done: out/minikube-linux-amd64 start -p static-ip-120861 --static-ip=192.168.200.200: (19.57898806s)
kic_custom_network_test.go:138: (dbg) Run:  out/minikube-linux-amd64 -p static-ip-120861 ip
helpers_test.go:176: Cleaning up "static-ip-120861" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p static-ip-120861
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p static-ip-120861: (2.132206771s)
--- PASS: TestKicStaticIP (21.86s)

                                                
                                    
x
+
TestMainNoArgs (0.06s)

                                                
                                                
=== RUN   TestMainNoArgs
main_test.go:70: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.06s)

                                                
                                    
x
+
TestMinikubeProfile (50.43s)

                                                
                                                
=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p first-799830 --driver=docker  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p first-799830 --driver=docker  --container-runtime=crio: (21.77409534s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p second-801805 --driver=docker  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p second-801805 --driver=docker  --container-runtime=crio: (22.756855492s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile first-799830
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile second-801805
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
helpers_test.go:176: Cleaning up "second-801805" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p second-801805
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p second-801805: (2.328078233s)
helpers_test.go:176: Cleaning up "first-799830" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p first-799830
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p first-799830: (2.342946458s)
--- PASS: TestMinikubeProfile (50.43s)

                                                
                                    
x
+
TestMountStart/serial/StartWithMountFirst (7.6s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-1-440427 --memory=3072 --mount-string /tmp/TestMountStartserial2977179996/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio
mount_start_test.go:118: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-1-440427 --memory=3072 --mount-string /tmp/TestMountStartserial2977179996/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio: (6.600783045s)
--- PASS: TestMountStart/serial/StartWithMountFirst (7.60s)

                                                
                                    
x
+
TestMountStart/serial/VerifyMountFirst (0.27s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-440427 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.27s)
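
Editor's note: the mount-start tests launch a profile without Kubernetes but with a host directory mounted into the guest (--mount-string host-dir:/minikube-host plus the --mount-* tuning flags), and then verify it simply by listing /minikube-host over minikube ssh. A hedged Go sketch of that start-and-verify pair, reusing the flag values from the log above; the profile name mount-demo and the temp directory are placeholders.

package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	hostDir, err := os.MkdirTemp("", "mount-demo")
	if err != nil {
		fmt.Println(err)
		return
	}
	// Start a no-Kubernetes profile with hostDir mounted at /minikube-host,
	// mirroring the flag set used by the test above.
	start := exec.Command("minikube", "start", "-p", "mount-demo", "--memory=3072",
		"--mount-string", hostDir+":/minikube-host",
		"--mount-gid", "0", "--mount-msize", "6543", "--mount-port", "46464", "--mount-uid", "0",
		"--no-kubernetes", "--driver=docker", "--container-runtime=crio")
	start.Stdout, start.Stderr = os.Stdout, os.Stderr
	if err := start.Run(); err != nil {
		fmt.Println("start failed:", err)
		return
	}
	// Verify the mount the same way VerifyMountFirst does: list it over ssh.
	out, err := exec.Command("minikube", "-p", "mount-demo", "ssh", "--", "ls", "/minikube-host").CombinedOutput()
	fmt.Printf("ls /minikube-host (err=%v):\n%s", err, out)
}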

                                                
                                    
x
+
TestMountStart/serial/StartWithMountSecond (4.68s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-453795 --memory=3072 --mount-string /tmp/TestMountStartserial2977179996/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio
mount_start_test.go:118: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-453795 --memory=3072 --mount-string /tmp/TestMountStartserial2977179996/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio: (3.677315437s)
--- PASS: TestMountStart/serial/StartWithMountSecond (4.68s)

                                                
                                    
x
+
TestMountStart/serial/VerifyMountSecond (0.27s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-453795 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.27s)

                                                
                                    
x
+
TestMountStart/serial/DeleteFirst (1.68s)

                                                
                                                
=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p mount-start-1-440427 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-amd64 delete -p mount-start-1-440427 --alsologtostderr -v=5: (1.680273561s)
--- PASS: TestMountStart/serial/DeleteFirst (1.68s)

                                                
                                    
x
+
TestMountStart/serial/VerifyMountPostDelete (0.27s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-453795 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.27s)

                                                
                                    
x
+
TestMountStart/serial/Stop (1.26s)

                                                
                                                
=== RUN   TestMountStart/serial/Stop
mount_start_test.go:196: (dbg) Run:  out/minikube-linux-amd64 stop -p mount-start-2-453795
mount_start_test.go:196: (dbg) Done: out/minikube-linux-amd64 stop -p mount-start-2-453795: (1.257677333s)
--- PASS: TestMountStart/serial/Stop (1.26s)

                                                
                                    
x
+
TestMountStart/serial/RestartStopped (7.06s)

                                                
                                                
=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:207: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-453795
mount_start_test.go:207: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-453795: (6.062517366s)
--- PASS: TestMountStart/serial/RestartStopped (7.06s)

                                                
                                    
x
+
TestMountStart/serial/VerifyMountPostStop (0.27s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-453795 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.27s)

                                                
                                    
x
+
TestMultiNode/serial/FreshStart2Nodes (65.88s)

                                                
                                                
=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-415097 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=docker  --container-runtime=crio
E1225 18:48:52.419597    9112 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22301-5579/.minikube/profiles/functional-984202/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1225 18:49:40.370656    9112 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22301-5579/.minikube/profiles/addons-335994/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:96: (dbg) Done: out/minikube-linux-amd64 start -p multinode-415097 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=docker  --container-runtime=crio: (1m5.397538781s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-amd64 -p multinode-415097 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (65.88s)

                                                
                                    
x
+
TestMultiNode/serial/DeployApp2Nodes (3.5s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-415097 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-415097 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-amd64 kubectl -p multinode-415097 -- rollout status deployment/busybox: (2.094320875s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-415097 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-415097 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-415097 -- exec busybox-7b57f96db7-9sd5l -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-415097 -- exec busybox-7b57f96db7-lz4ds -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-415097 -- exec busybox-7b57f96db7-9sd5l -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-415097 -- exec busybox-7b57f96db7-lz4ds -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-415097 -- exec busybox-7b57f96db7-9sd5l -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-415097 -- exec busybox-7b57f96db7-lz4ds -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (3.50s)
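
Editor's note: the DNS assertions above enumerate the busybox pods with a JSONPath query and then exec nslookup inside each one for three targets. A condensed Go sketch of that loop, assuming the current kubectl context is the multinode cluster and the listed pods are the busybox test pods; the lookup targets are taken from the log.

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Enumerate the pods with the same JSONPath query the test uses.
	out, err := exec.Command("kubectl", "get", "pods",
		"-o", "jsonpath={.items[*].metadata.name}").Output()
	if err != nil {
		fmt.Println("listing pods failed:", err)
		return
	}
	targets := []string{"kubernetes.io", "kubernetes.default", "kubernetes.default.svc.cluster.local"}
	for _, pod := range strings.Fields(string(out)) {
		for _, target := range targets {
			// Same in-pod lookup the test performs via kubectl exec.
			res, err := exec.Command("kubectl", "exec", pod, "--", "nslookup", target).CombinedOutput()
			fmt.Printf("%s -> %s (err=%v)\n%s", pod, target, err, res)
		}
	}
}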

                                                
                                    
x
+
TestMultiNode/serial/PingHostFrom2Pods (0.73s)

                                                
                                                
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-415097 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-415097 -- exec busybox-7b57f96db7-9sd5l -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-415097 -- exec busybox-7b57f96db7-9sd5l -- sh -c "ping -c 1 192.168.67.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-415097 -- exec busybox-7b57f96db7-lz4ds -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-415097 -- exec busybox-7b57f96db7-lz4ds -- sh -c "ping -c 1 192.168.67.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.73s)

                                                
                                    
x
+
TestMultiNode/serial/AddNode (26.8s)

                                                
                                                
=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-415097 -v=5 --alsologtostderr
multinode_test.go:121: (dbg) Done: out/minikube-linux-amd64 node add -p multinode-415097 -v=5 --alsologtostderr: (26.157671307s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p multinode-415097 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (26.80s)

                                                
                                    
x
+
TestMultiNode/serial/MultiNodeLabels (0.06s)

                                                
                                                
=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-415097 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.06s)

                                                
                                    
x
+
TestMultiNode/serial/ProfileList (0.64s)

                                                
                                                
=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.64s)

                                                
                                    
x
+
TestMultiNode/serial/CopyFile (9.68s)

                                                
                                                
=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-amd64 -p multinode-415097 status --output json --alsologtostderr
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p multinode-415097 cp testdata/cp-test.txt multinode-415097:/home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-415097 ssh -n multinode-415097 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p multinode-415097 cp multinode-415097:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3454589175/001/cp-test_multinode-415097.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-415097 ssh -n multinode-415097 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p multinode-415097 cp multinode-415097:/home/docker/cp-test.txt multinode-415097-m02:/home/docker/cp-test_multinode-415097_multinode-415097-m02.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-415097 ssh -n multinode-415097 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-415097 ssh -n multinode-415097-m02 "sudo cat /home/docker/cp-test_multinode-415097_multinode-415097-m02.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p multinode-415097 cp multinode-415097:/home/docker/cp-test.txt multinode-415097-m03:/home/docker/cp-test_multinode-415097_multinode-415097-m03.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-415097 ssh -n multinode-415097 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-415097 ssh -n multinode-415097-m03 "sudo cat /home/docker/cp-test_multinode-415097_multinode-415097-m03.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p multinode-415097 cp testdata/cp-test.txt multinode-415097-m02:/home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-415097 ssh -n multinode-415097-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p multinode-415097 cp multinode-415097-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3454589175/001/cp-test_multinode-415097-m02.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-415097 ssh -n multinode-415097-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p multinode-415097 cp multinode-415097-m02:/home/docker/cp-test.txt multinode-415097:/home/docker/cp-test_multinode-415097-m02_multinode-415097.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-415097 ssh -n multinode-415097-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-415097 ssh -n multinode-415097 "sudo cat /home/docker/cp-test_multinode-415097-m02_multinode-415097.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p multinode-415097 cp multinode-415097-m02:/home/docker/cp-test.txt multinode-415097-m03:/home/docker/cp-test_multinode-415097-m02_multinode-415097-m03.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-415097 ssh -n multinode-415097-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-415097 ssh -n multinode-415097-m03 "sudo cat /home/docker/cp-test_multinode-415097-m02_multinode-415097-m03.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p multinode-415097 cp testdata/cp-test.txt multinode-415097-m03:/home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-415097 ssh -n multinode-415097-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p multinode-415097 cp multinode-415097-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3454589175/001/cp-test_multinode-415097-m03.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-415097 ssh -n multinode-415097-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p multinode-415097 cp multinode-415097-m03:/home/docker/cp-test.txt multinode-415097:/home/docker/cp-test_multinode-415097-m03_multinode-415097.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-415097 ssh -n multinode-415097-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-415097 ssh -n multinode-415097 "sudo cat /home/docker/cp-test_multinode-415097-m03_multinode-415097.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p multinode-415097 cp multinode-415097-m03:/home/docker/cp-test.txt multinode-415097-m02:/home/docker/cp-test_multinode-415097-m03_multinode-415097-m02.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-415097 ssh -n multinode-415097-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-415097 ssh -n multinode-415097-m02 "sudo cat /home/docker/cp-test_multinode-415097-m03_multinode-415097-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (9.68s)

                                                
                                    
x
+
TestMultiNode/serial/StopNode (2.25s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-amd64 -p multinode-415097 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-amd64 -p multinode-415097 node stop m03: (1.275497775s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-amd64 -p multinode-415097 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-415097 status: exit status 7 (488.365542ms)

                                                
                                                
-- stdout --
	multinode-415097
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-415097-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-415097-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p multinode-415097 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-415097 status --alsologtostderr: exit status 7 (488.325734ms)

                                                
                                                
-- stdout --
	multinode-415097
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-415097-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-415097-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1225 18:50:31.599327  161580 out.go:360] Setting OutFile to fd 1 ...
	I1225 18:50:31.599426  161580 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1225 18:50:31.599434  161580 out.go:374] Setting ErrFile to fd 2...
	I1225 18:50:31.599438  161580 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1225 18:50:31.599603  161580 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22301-5579/.minikube/bin
	I1225 18:50:31.599743  161580 out.go:368] Setting JSON to false
	I1225 18:50:31.599771  161580 mustload.go:66] Loading cluster: multinode-415097
	I1225 18:50:31.599887  161580 notify.go:221] Checking for updates...
	I1225 18:50:31.600148  161580 config.go:182] Loaded profile config "multinode-415097": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1225 18:50:31.600169  161580 status.go:174] checking status of multinode-415097 ...
	I1225 18:50:31.600590  161580 cli_runner.go:164] Run: docker container inspect multinode-415097 --format={{.State.Status}}
	I1225 18:50:31.619754  161580 status.go:371] multinode-415097 host status = "Running" (err=<nil>)
	I1225 18:50:31.619773  161580 host.go:66] Checking if "multinode-415097" exists ...
	I1225 18:50:31.620038  161580 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-415097
	I1225 18:50:31.638281  161580 host.go:66] Checking if "multinode-415097" exists ...
	I1225 18:50:31.638615  161580 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1225 18:50:31.638660  161580 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-415097
	I1225 18:50:31.656243  161580 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32908 SSHKeyPath:/home/jenkins/minikube-integration/22301-5579/.minikube/machines/multinode-415097/id_rsa Username:docker}
	I1225 18:50:31.744269  161580 ssh_runner.go:195] Run: systemctl --version
	I1225 18:50:31.750886  161580 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1225 18:50:31.763160  161580 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1225 18:50:31.818596  161580 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:50 OomKillDisable:false NGoroutines:64 SystemTime:2025-12-25 18:50:31.808192704 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:dea7da592f5d1d2b7755e3a161be07f43fad8f75 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1225 18:50:31.819291  161580 kubeconfig.go:125] found "multinode-415097" server: "https://192.168.67.2:8443"
	I1225 18:50:31.819364  161580 api_server.go:166] Checking apiserver status ...
	I1225 18:50:31.819410  161580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1225 18:50:31.830866  161580 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1261/cgroup
	I1225 18:50:31.840652  161580 ssh_runner.go:195] Run: sudo grep ^0:: /proc/1261/cgroup
	I1225 18:50:31.847977  161580 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/system.slice/crio-8a777c11a3c8c46823d495b2258d4c90ae1ada745198cc8f2824369a35acf8f6.scope/container/cgroup.freeze
	I1225 18:50:31.855138  161580 api_server.go:299] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I1225 18:50:31.860091  161580 api_server.go:325] https://192.168.67.2:8443/healthz returned 200:
	ok
	I1225 18:50:31.860113  161580 status.go:463] multinode-415097 apiserver status = Running (err=<nil>)
	I1225 18:50:31.860121  161580 status.go:176] multinode-415097 status: &{Name:multinode-415097 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1225 18:50:31.860136  161580 status.go:174] checking status of multinode-415097-m02 ...
	I1225 18:50:31.860372  161580 cli_runner.go:164] Run: docker container inspect multinode-415097-m02 --format={{.State.Status}}
	I1225 18:50:31.877193  161580 status.go:371] multinode-415097-m02 host status = "Running" (err=<nil>)
	I1225 18:50:31.877220  161580 host.go:66] Checking if "multinode-415097-m02" exists ...
	I1225 18:50:31.877491  161580 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-415097-m02
	I1225 18:50:31.894635  161580 host.go:66] Checking if "multinode-415097-m02" exists ...
	I1225 18:50:31.894951  161580 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1225 18:50:31.894999  161580 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-415097-m02
	I1225 18:50:31.912051  161580 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32913 SSHKeyPath:/home/jenkins/minikube-integration/22301-5579/.minikube/machines/multinode-415097-m02/id_rsa Username:docker}
	I1225 18:50:31.999991  161580 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1225 18:50:32.012246  161580 status.go:176] multinode-415097-m02 status: &{Name:multinode-415097-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I1225 18:50:32.012286  161580 status.go:174] checking status of multinode-415097-m03 ...
	I1225 18:50:32.012540  161580 cli_runner.go:164] Run: docker container inspect multinode-415097-m03 --format={{.State.Status}}
	I1225 18:50:32.030204  161580 status.go:371] multinode-415097-m03 host status = "Stopped" (err=<nil>)
	I1225 18:50:32.030224  161580 status.go:384] host is not running, skipping remaining checks
	I1225 18:50:32.030230  161580 status.go:176] multinode-415097-m03 status: &{Name:multinode-415097-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.25s)
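
Editor's note: once any node's host is stopped, minikube status exits non-zero (exit status 7 in the run above), so the test asserts on the printed per-node status rather than on a zero exit code. A small Go sketch that runs the status command and treats that exit code as "some node stopped"; interpreting 7 this way follows what this log shows, not a documented contract, and the profile name is the one used above.

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	out, err := exec.Command("minikube", "-p", "multinode-415097", "status").Output()
	fmt.Print(string(out))
	var exitErr *exec.ExitError
	switch {
	case err == nil:
		fmt.Println("all nodes running")
	case errors.As(err, &exitErr) && exitErr.ExitCode() == 7:
		// Matches the exit status seen in the log when a node's host is stopped.
		fmt.Println("at least one node is stopped")
	default:
		fmt.Println("status failed:", err)
	}
}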

                                                
                                    
x
+
TestMultiNode/serial/StartAfterStop (7.02s)

                                                
                                                
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-amd64 -p multinode-415097 node start m03 -v=5 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-amd64 -p multinode-415097 node start m03 -v=5 --alsologtostderr: (6.33120956s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-amd64 -p multinode-415097 status -v=5 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (7.02s)

                                                
                                    
x
+
TestMultiNode/serial/RestartKeepsNodes (72.25s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-415097
multinode_test.go:321: (dbg) Run:  out/minikube-linux-amd64 stop -p multinode-415097
E1225 18:51:03.419889    9112 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22301-5579/.minikube/profiles/addons-335994/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:321: (dbg) Done: out/minikube-linux-amd64 stop -p multinode-415097: (29.569016754s)
multinode_test.go:326: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-415097 --wait=true -v=5 --alsologtostderr
E1225 18:51:20.321539    9112 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22301-5579/.minikube/profiles/functional-969923/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:326: (dbg) Done: out/minikube-linux-amd64 start -p multinode-415097 --wait=true -v=5 --alsologtostderr: (42.565499601s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-415097
--- PASS: TestMultiNode/serial/RestartKeepsNodes (72.25s)

                                                
                                    
x
+
TestMultiNode/serial/DeleteNode (5.25s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-amd64 -p multinode-415097 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-amd64 -p multinode-415097 node delete m03: (4.668662737s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p multinode-415097 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (5.25s)

                                                
                                    
x
+
TestMultiNode/serial/StopMultiNode (28.52s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-amd64 -p multinode-415097 stop
multinode_test.go:345: (dbg) Done: out/minikube-linux-amd64 -p multinode-415097 stop: (28.321491697s)
multinode_test.go:351: (dbg) Run:  out/minikube-linux-amd64 -p multinode-415097 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-415097 status: exit status 7 (97.3285ms)

                                                
                                                
-- stdout --
	multinode-415097
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-415097-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-linux-amd64 -p multinode-415097 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-415097 status --alsologtostderr: exit status 7 (98.386963ms)

                                                
                                                
-- stdout --
	multinode-415097
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-415097-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1225 18:52:25.031152  171381 out.go:360] Setting OutFile to fd 1 ...
	I1225 18:52:25.031256  171381 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1225 18:52:25.031264  171381 out.go:374] Setting ErrFile to fd 2...
	I1225 18:52:25.031269  171381 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1225 18:52:25.031449  171381 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22301-5579/.minikube/bin
	I1225 18:52:25.031603  171381 out.go:368] Setting JSON to false
	I1225 18:52:25.031630  171381 mustload.go:66] Loading cluster: multinode-415097
	I1225 18:52:25.031696  171381 notify.go:221] Checking for updates...
	I1225 18:52:25.031975  171381 config.go:182] Loaded profile config "multinode-415097": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1225 18:52:25.031989  171381 status.go:174] checking status of multinode-415097 ...
	I1225 18:52:25.032388  171381 cli_runner.go:164] Run: docker container inspect multinode-415097 --format={{.State.Status}}
	I1225 18:52:25.051960  171381 status.go:371] multinode-415097 host status = "Stopped" (err=<nil>)
	I1225 18:52:25.052009  171381 status.go:384] host is not running, skipping remaining checks
	I1225 18:52:25.052024  171381 status.go:176] multinode-415097 status: &{Name:multinode-415097 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1225 18:52:25.052062  171381 status.go:174] checking status of multinode-415097-m02 ...
	I1225 18:52:25.052465  171381 cli_runner.go:164] Run: docker container inspect multinode-415097-m02 --format={{.State.Status}}
	I1225 18:52:25.071752  171381 status.go:371] multinode-415097-m02 host status = "Stopped" (err=<nil>)
	I1225 18:52:25.071772  171381 status.go:384] host is not running, skipping remaining checks
	I1225 18:52:25.071778  171381 status.go:176] multinode-415097-m02 status: &{Name:multinode-415097-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (28.52s)
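For reference, a minimal shell sketch of the stop-then-status sequence this test drives (same binary and profile name as in the log above); note that `status` deliberately exits 7 once the hosts are stopped, so a reproduction script has to tolerate that exit code:

	# Stop every node in the multi-node profile, then query status.
	out/minikube-linux-amd64 -p multinode-415097 stop
	# For a stopped cluster, `status` reports Host/Kubelet/APIServer as Stopped and exits 7.
	out/minikube-linux-amd64 -p multinode-415097 status || echo "status exit code: $? (7 = stopped, as expected here)"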

                                                
                                    
TestMultiNode/serial/RestartMultiNode (48.59s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-415097 --wait=true -v=5 --alsologtostderr --driver=docker  --container-runtime=crio
multinode_test.go:376: (dbg) Done: out/minikube-linux-amd64 start -p multinode-415097 --wait=true -v=5 --alsologtostderr --driver=docker  --container-runtime=crio: (48.003674199s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-amd64 -p multinode-415097 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (48.59s)

                                                
                                    
TestMultiNode/serial/ValidateNameConflict (22.27s)

                                                
                                                
=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-415097
multinode_test.go:464: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-415097-m02 --driver=docker  --container-runtime=crio
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-415097-m02 --driver=docker  --container-runtime=crio: exit status 14 (73.629402ms)

                                                
                                                
-- stdout --
	* [multinode-415097-m02] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=22301
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22301-5579/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22301-5579/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! Profile name 'multinode-415097-m02' is duplicated with machine name 'multinode-415097-m02' in profile 'multinode-415097'
	X Exiting due to MK_USAGE: Profile name should be unique

                                                
                                                
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-415097-m03 --driver=docker  --container-runtime=crio
multinode_test.go:472: (dbg) Done: out/minikube-linux-amd64 start -p multinode-415097-m03 --driver=docker  --container-runtime=crio: (19.50848703s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-415097
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-amd64 node add -p multinode-415097: exit status 80 (285.748771ms)

                                                
                                                
-- stdout --
	* Adding node m03 to cluster multinode-415097 as [worker]
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-415097-m03 already exists in multinode-415097-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-amd64 delete -p multinode-415097-m03
multinode_test.go:484: (dbg) Done: out/minikube-linux-amd64 delete -p multinode-415097-m03: (2.338893463s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (22.27s)
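The name-conflict checks above boil down to the following sketch (profile names and exit codes taken from the log; treat it as an illustration rather than the test's exact harness):

	# Rejected with exit 14 (MK_USAGE): the profile name collides with an existing machine name,
	# since multinode-415097-m02 is already the second node of profile multinode-415097.
	out/minikube-linux-amd64 start -p multinode-415097-m02 --driver=docker --container-runtime=crio
	# A non-conflicting profile starts fine, but `node add` on the original profile then fails with
	# exit 80 (GUEST_NODE_ADD) because the next node name, multinode-415097-m03, is already taken.
	out/minikube-linux-amd64 start -p multinode-415097-m03 --driver=docker --container-runtime=crio
	out/minikube-linux-amd64 node add -p multinode-415097
	out/minikube-linux-amd64 delete -p multinode-415097-m03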

                                                
                                    
TestScheduledStopUnix (95.35s)

                                                
                                                
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-amd64 start -p scheduled-stop-179184 --memory=3072 --driver=docker  --container-runtime=crio
E1225 18:53:52.420618    9112 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22301-5579/.minikube/profiles/functional-984202/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-amd64 start -p scheduled-stop-179184 --memory=3072 --driver=docker  --container-runtime=crio: (19.668590292s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-179184 --schedule 5m -v=5 --alsologtostderr
minikube stop output:

                                                
                                                
** stderr ** 
	I1225 18:53:59.893074  181335 out.go:360] Setting OutFile to fd 1 ...
	I1225 18:53:59.893191  181335 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1225 18:53:59.893200  181335 out.go:374] Setting ErrFile to fd 2...
	I1225 18:53:59.893207  181335 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1225 18:53:59.893441  181335 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22301-5579/.minikube/bin
	I1225 18:53:59.893678  181335 out.go:368] Setting JSON to false
	I1225 18:53:59.893775  181335 mustload.go:66] Loading cluster: scheduled-stop-179184
	I1225 18:53:59.894091  181335 config.go:182] Loaded profile config "scheduled-stop-179184": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1225 18:53:59.894200  181335 profile.go:143] Saving config to /home/jenkins/minikube-integration/22301-5579/.minikube/profiles/scheduled-stop-179184/config.json ...
	I1225 18:53:59.894386  181335 mustload.go:66] Loading cluster: scheduled-stop-179184
	I1225 18:53:59.894490  181335 config.go:182] Loaded profile config "scheduled-stop-179184": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3

                                                
                                                
** /stderr **
scheduled_stop_test.go:204: (dbg) Run:  out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-179184 -n scheduled-stop-179184
scheduled_stop_test.go:172: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-179184 --schedule 15s -v=5 --alsologtostderr
minikube stop output:

                                                
                                                
** stderr ** 
	I1225 18:54:00.276221  181484 out.go:360] Setting OutFile to fd 1 ...
	I1225 18:54:00.276445  181484 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1225 18:54:00.276453  181484 out.go:374] Setting ErrFile to fd 2...
	I1225 18:54:00.276458  181484 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1225 18:54:00.276645  181484 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22301-5579/.minikube/bin
	I1225 18:54:00.276868  181484 out.go:368] Setting JSON to false
	I1225 18:54:00.277061  181484 daemonize_unix.go:73] killing process 181369 as it is an old scheduled stop
	I1225 18:54:00.277157  181484 mustload.go:66] Loading cluster: scheduled-stop-179184
	I1225 18:54:00.277488  181484 config.go:182] Loaded profile config "scheduled-stop-179184": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1225 18:54:00.277551  181484 profile.go:143] Saving config to /home/jenkins/minikube-integration/22301-5579/.minikube/profiles/scheduled-stop-179184/config.json ...
	I1225 18:54:00.277721  181484 mustload.go:66] Loading cluster: scheduled-stop-179184
	I1225 18:54:00.277811  181484 config.go:182] Loaded profile config "scheduled-stop-179184": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3

                                                
                                                
** /stderr **
scheduled_stop_test.go:172: signal error was:  os: process already finished
I1225 18:54:00.282717    9112 retry.go:84] will retry after 0s: open /home/jenkins/minikube-integration/22301-5579/.minikube/profiles/scheduled-stop-179184/pid: no such file or directory
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-179184 --cancel-scheduled
minikube stop output:

                                                
                                                
-- stdout --
	* All existing scheduled stops cancelled

                                                
                                                
-- /stdout --
scheduled_stop_test.go:189: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-179184 -n scheduled-stop-179184
scheduled_stop_test.go:218: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-179184
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-179184 --schedule 15s -v=5 --alsologtostderr
minikube stop output:

                                                
                                                
** stderr ** 
	I1225 18:54:26.163408  182208 out.go:360] Setting OutFile to fd 1 ...
	I1225 18:54:26.163659  182208 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1225 18:54:26.163668  182208 out.go:374] Setting ErrFile to fd 2...
	I1225 18:54:26.163672  182208 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1225 18:54:26.163841  182208 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22301-5579/.minikube/bin
	I1225 18:54:26.164074  182208 out.go:368] Setting JSON to false
	I1225 18:54:26.164151  182208 mustload.go:66] Loading cluster: scheduled-stop-179184
	I1225 18:54:26.164431  182208 config.go:182] Loaded profile config "scheduled-stop-179184": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1225 18:54:26.164502  182208 profile.go:143] Saving config to /home/jenkins/minikube-integration/22301-5579/.minikube/profiles/scheduled-stop-179184/config.json ...
	I1225 18:54:26.164682  182208 mustload.go:66] Loading cluster: scheduled-stop-179184
	I1225 18:54:26.164776  182208 config.go:182] Loaded profile config "scheduled-stop-179184": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3

                                                
                                                
** /stderr **
E1225 18:54:40.369941    9112 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22301-5579/.minikube/profiles/addons-335994/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
scheduled_stop_test.go:172: signal error was:  os: process already finished
scheduled_stop_test.go:218: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-179184
scheduled_stop_test.go:218: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p scheduled-stop-179184: exit status 7 (75.523791ms)

                                                
                                                
-- stdout --
	scheduled-stop-179184
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
scheduled_stop_test.go:189: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-179184 -n scheduled-stop-179184
scheduled_stop_test.go:189: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-179184 -n scheduled-stop-179184: exit status 7 (76.296349ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
scheduled_stop_test.go:189: status error: exit status 7 (may be ok)
helpers_test.go:176: Cleaning up "scheduled-stop-179184" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p scheduled-stop-179184
E1225 18:55:15.466029    9112 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22301-5579/.minikube/profiles/functional-984202/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p scheduled-stop-179184: (4.198063468s)
--- PASS: TestScheduledStopUnix (95.35s)
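A condensed sketch of the scheduled-stop flow exercised above, using only the flags that appear in the log (the timings are the test's own and incidental):

	# Schedule a stop 5 minutes out, then replace it with a 15s schedule;
	# the log shows the second call kills the previous scheduler process (pid 181369).
	out/minikube-linux-amd64 stop -p scheduled-stop-179184 --schedule 5m
	out/minikube-linux-amd64 stop -p scheduled-stop-179184 --schedule 15s
	# Cancel any pending scheduled stop.
	out/minikube-linux-amd64 stop -p scheduled-stop-179184 --cancel-scheduled
	# Once a scheduled stop has fired, status reports "Stopped" and exits 7 ("may be ok").
	out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-179184 -n scheduled-stop-179184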

                                                
                                    
TestInsufficientStorage (8.64s)

                                                
                                                
=== RUN   TestInsufficientStorage
status_test.go:50: (dbg) Run:  out/minikube-linux-amd64 start -p insufficient-storage-793997 --memory=3072 --output=json --wait=true --driver=docker  --container-runtime=crio
status_test.go:50: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p insufficient-storage-793997 --memory=3072 --output=json --wait=true --driver=docker  --container-runtime=crio: exit status 26 (6.191989006s)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"772d6314-e61c-4ec8-bb5e-96897de335d0","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-793997] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"bc40df98-36d4-4bc6-88cc-976a1b05a232","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=22301"}}
	{"specversion":"1.0","id":"bef17f4d-fd8a-467a-9120-c4ee633214e0","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"1c1fb52b-eadb-443d-ac5b-34f69b6debe0","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/22301-5579/kubeconfig"}}
	{"specversion":"1.0","id":"97c909a7-4585-4498-8a7c-a9ed3e1e5a60","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/22301-5579/.minikube"}}
	{"specversion":"1.0","id":"d83034b8-f33f-4870-bb24-b7a8bb78852f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"8246dbf3-52d3-4cfd-9181-b145010df747","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"747d5fe5-11bd-4fee-bd37-3ae9f95b920c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"e9774d65-2b38-4f1e-8414-2843462708e3","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_AVAILABLE_STORAGE=19"}}
	{"specversion":"1.0","id":"499c30db-a82b-4364-b7db-cc4d26b6651f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"837f2cfc-aa8a-4384-92d1-49d3048a6d8b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using Docker driver with root privileges"}}
	{"specversion":"1.0","id":"c46169de-fcb3-4a88-993c-7a809f858024","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"insufficient-storage-793997\" primary control-plane node in \"insufficient-storage-793997\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"de61e030-5bd5-4d16-b94d-4083e03e3fa0","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image v0.0.48-1766570851-22316 ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"3e091eec-c81e-468d-a952-83e759588007","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=3072MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"962247f0-9e76-41de-b8ab-97f610e4da41","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Try one or more of the following to free up space on the device:\n\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100% of capacity). You can pass '--force' to skip this check.","name":"RSRC_DOCKER_STORAGE","url":""}}

                                                
                                                
-- /stdout --
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p insufficient-storage-793997 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p insufficient-storage-793997 --output=json --layout=cluster: exit status 7 (280.185849ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-793997","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=3072MB) ...","BinaryVersion":"v1.37.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-793997","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E1225 18:55:21.978400  184730 status.go:458] kubeconfig endpoint: get endpoint: "insufficient-storage-793997" does not appear in /home/jenkins/minikube-integration/22301-5579/kubeconfig

                                                
                                                
** /stderr **
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p insufficient-storage-793997 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p insufficient-storage-793997 --output=json --layout=cluster: exit status 7 (275.297495ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-793997","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.37.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-793997","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E1225 18:55:22.254461  184840 status.go:458] kubeconfig endpoint: get endpoint: "insufficient-storage-793997" does not appear in /home/jenkins/minikube-integration/22301-5579/kubeconfig
	E1225 18:55:22.264566  184840 status.go:258] unable to read event log: stat: stat /home/jenkins/minikube-integration/22301-5579/.minikube/profiles/insufficient-storage-793997/events.json: no such file or directory

                                                
                                                
** /stderr **
helpers_test.go:176: Cleaning up "insufficient-storage-793997" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p insufficient-storage-793997
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p insufficient-storage-793997: (1.894302439s)
--- PASS: TestInsufficientStorage (8.64s)
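Judging by the MINIKUBE_TEST_STORAGE_CAPACITY/MINIKUBE_TEST_AVAILABLE_STORAGE values printed in the JSON output, the low-disk condition appears to be simulated via those environment variables rather than by actually filling /var; a hedged reproduction sketch under that assumption:

	# Simulate a nearly full disk (assumed test-only knobs, values as printed above), then start.
	MINIKUBE_TEST_STORAGE_CAPACITY=100 MINIKUBE_TEST_AVAILABLE_STORAGE=19 \
	  out/minikube-linux-amd64 start -p insufficient-storage-793997 --memory=3072 \
	  --output=json --wait=true --driver=docker --container-runtime=crio
	echo "start exit code: $?"   # 26 = RSRC_DOCKER_STORAGE in the run above
	# Status still responds, but with StatusCode 507 (InsufficientStorage) and exit 7.
	out/minikube-linux-amd64 status -p insufficient-storage-793997 --output=json --layout=cluster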

                                                
                                    
TestRunningBinaryUpgrade (294.37s)

                                                
                                                
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.35.0.4231257582 start -p running-upgrade-861192 --memory=3072 --vm-driver=docker  --container-runtime=crio
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.35.0.4231257582 start -p running-upgrade-861192 --memory=3072 --vm-driver=docker  --container-runtime=crio: (21.310230971s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-amd64 start -p running-upgrade-861192 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-amd64 start -p running-upgrade-861192 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (4m28.91338314s)
helpers_test.go:176: Cleaning up "running-upgrade-861192" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p running-upgrade-861192
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p running-upgrade-861192: (3.509940007s)
--- PASS: TestRunningBinaryUpgrade (294.37s)
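The running-binary upgrade pattern above reduces to starting a profile with an old release and then re-running `start` on the same profile with the new binary while the cluster is still up; the /tmp/minikube-v1.35.0.* path is just the temporary copy of the old release the test downloads, so the exact filename is incidental:

	# Start with the old binary (which, as in the log, uses --vm-driver), then upgrade in place.
	/tmp/minikube-v1.35.0.4231257582 start -p running-upgrade-861192 --memory=3072 --vm-driver=docker --container-runtime=crio
	out/minikube-linux-amd64 start -p running-upgrade-861192 --memory=3072 --driver=docker --container-runtime=crio
	out/minikube-linux-amd64 delete -p running-upgrade-861192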

                                                
                                    
TestKubernetesUpgrade (329.74s)

                                                
                                                
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-498224 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:222: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-498224 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (23.339863264s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-amd64 stop -p kubernetes-upgrade-498224 --alsologtostderr
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-amd64 stop -p kubernetes-upgrade-498224 --alsologtostderr: (1.940386975s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-498224 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-amd64 -p kubernetes-upgrade-498224 status --format={{.Host}}: exit status 7 (81.728018ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-498224 --memory=3072 --kubernetes-version=v1.35.0-rc.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-498224 --memory=3072 --kubernetes-version=v1.35.0-rc.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (4m55.05286399s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-498224 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-498224 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-498224 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio: exit status 106 (95.200016ms)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-498224] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=22301
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22301-5579/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22301-5579/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.35.0-rc.1 cluster to v1.28.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.28.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-498224
	    minikube start -p kubernetes-upgrade-498224 --kubernetes-version=v1.28.0
	    
	    2) Create a second cluster with Kubernetes 1.28.0, by running:
	    
	    minikube start -p kubernetes-upgrade-4982242 --kubernetes-version=v1.28.0
	    
	    3) Use the existing cluster at version Kubernetes 1.35.0-rc.1, by running:
	    
	    minikube start -p kubernetes-upgrade-498224 --kubernetes-version=v1.35.0-rc.1
	    

                                                
                                                
** /stderr **
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-498224 --memory=3072 --kubernetes-version=v1.35.0-rc.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-498224 --memory=3072 --kubernetes-version=v1.35.0-rc.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (5.95238926s)
helpers_test.go:176: Cleaning up "kubernetes-upgrade-498224" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p kubernetes-upgrade-498224
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p kubernetes-upgrade-498224: (3.197396561s)
--- PASS: TestKubernetesUpgrade (329.74s)
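The Kubernetes upgrade path above, as a sketch (versions and exit codes from the log): create on v1.28.0, stop, restart on v1.35.0-rc.1, and expect an in-place downgrade attempt to be refused:

	out/minikube-linux-amd64 start -p kubernetes-upgrade-498224 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker --container-runtime=crio
	out/minikube-linux-amd64 stop -p kubernetes-upgrade-498224
	out/minikube-linux-amd64 start -p kubernetes-upgrade-498224 --memory=3072 --kubernetes-version=v1.35.0-rc.1 --driver=docker --container-runtime=crio
	# Downgrading the same profile exits 106 (K8S_DOWNGRADE_UNSUPPORTED); per the suggestion above,
	# the way back to v1.28.0 is `minikube delete` followed by a fresh `minikube start`.
	out/minikube-linux-amd64 start -p kubernetes-upgrade-498224 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker --container-runtime=crio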

                                                
                                    
TestMissingContainerUpgrade (65.08s)

                                                
                                                
=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:309: (dbg) Run:  /tmp/minikube-v1.35.0.445613248 start -p missing-upgrade-122711 --memory=3072 --driver=docker  --container-runtime=crio
version_upgrade_test.go:309: (dbg) Done: /tmp/minikube-v1.35.0.445613248 start -p missing-upgrade-122711 --memory=3072 --driver=docker  --container-runtime=crio: (20.506313217s)
version_upgrade_test.go:318: (dbg) Run:  docker stop missing-upgrade-122711
version_upgrade_test.go:318: (dbg) Done: docker stop missing-upgrade-122711: (1.729232937s)
version_upgrade_test.go:323: (dbg) Run:  docker rm missing-upgrade-122711
version_upgrade_test.go:329: (dbg) Run:  out/minikube-linux-amd64 start -p missing-upgrade-122711 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:329: (dbg) Done: out/minikube-linux-amd64 start -p missing-upgrade-122711 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (40.289374762s)
helpers_test.go:176: Cleaning up "missing-upgrade-122711" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p missing-upgrade-122711
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p missing-upgrade-122711: (1.99922324s)
--- PASS: TestMissingContainerUpgrade (65.08s)
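The "missing container" scenario above is essentially the following recovery sketch: the profile's Docker container is stopped and removed behind minikube's back, and a plain `start` with the new binary recreates it:

	/tmp/minikube-v1.35.0.445613248 start -p missing-upgrade-122711 --memory=3072 --driver=docker --container-runtime=crio
	# Remove the container out from under minikube.
	docker stop missing-upgrade-122711
	docker rm missing-upgrade-122711
	# A normal start with the new binary recreates the missing container for the same profile.
	out/minikube-linux-amd64 start -p missing-upgrade-122711 --memory=3072 --driver=docker --container-runtime=crio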

                                                
                                    
TestStoppedBinaryUpgrade/Setup (0.71s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (0.71s)

                                                
                                    
TestPause/serial/Start (51.77s)

                                                
                                                
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p pause-720311 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=crio
pause_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p pause-720311 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=crio: (51.769544835s)
--- PASS: TestPause/serial/Start (51.77s)

                                                
                                    
TestNoKubernetes/serial/StartNoK8sWithVersion (0.09s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:108: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-904366 --no-kubernetes --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:108: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p NoKubernetes-904366 --no-kubernetes --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio: exit status 14 (92.653923ms)

                                                
                                                
-- stdout --
	* [NoKubernetes-904366] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=22301
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22301-5579/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22301-5579/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.09s)
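In short, `--no-kubernetes` and `--kubernetes-version` are mutually exclusive; a sketch of the rejected invocation and of the follow-up the error message itself suggests when the version comes from global config:

	# Rejected with exit 14 (MK_USAGE): pinning a Kubernetes version makes no sense without Kubernetes.
	out/minikube-linux-amd64 start -p NoKubernetes-904366 --no-kubernetes --kubernetes-version=v1.28.0 --driver=docker --container-runtime=crio
	# If the version is set in global config rather than on the command line, unset it as advised.
	out/minikube-linux-amd64 config unset kubernetes-version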

                                                
                                    
TestNoKubernetes/serial/StartWithK8s (39.74s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:120: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-904366 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:120: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-904366 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (39.406801241s)
no_kubernetes_test.go:225: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-904366 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (39.74s)

                                                
                                    
TestStoppedBinaryUpgrade/Upgrade (304.28s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.35.0.2933028654 start -p stopped-upgrade-746190 --memory=3072 --vm-driver=docker  --container-runtime=crio
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.35.0.2933028654 start -p stopped-upgrade-746190 --memory=3072 --vm-driver=docker  --container-runtime=crio: (41.687116733s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.35.0.2933028654 -p stopped-upgrade-746190 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.35.0.2933028654 -p stopped-upgrade-746190 stop: (1.854292878s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-amd64 start -p stopped-upgrade-746190 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-amd64 start -p stopped-upgrade-746190 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (4m20.730948417s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (304.28s)

                                                
                                    
TestNoKubernetes/serial/StartWithStopK8s (23.73s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:137: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-904366 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:137: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-904366 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (21.324947807s)
no_kubernetes_test.go:225: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-904366 status -o json
no_kubernetes_test.go:225: (dbg) Non-zero exit: out/minikube-linux-amd64 -p NoKubernetes-904366 status -o json: exit status 2 (320.354027ms)

                                                
                                                
-- stdout --
	{"Name":"NoKubernetes-904366","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

                                                
                                                
-- /stdout --
no_kubernetes_test.go:149: (dbg) Run:  out/minikube-linux-amd64 delete -p NoKubernetes-904366
no_kubernetes_test.go:149: (dbg) Done: out/minikube-linux-amd64 delete -p NoKubernetes-904366: (2.086643243s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (23.73s)

                                                
                                    
TestPause/serial/SecondStartNoReconfiguration (5.9s)

                                                
                                                
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-amd64 start -p pause-720311 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
pause_test.go:92: (dbg) Done: out/minikube-linux-amd64 start -p pause-720311 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (5.886634087s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (5.90s)

                                                
                                    
TestNetworkPlugins/group/false (3.83s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-amd64 start -p false-910464 --memory=3072 --alsologtostderr --cni=false --driver=docker  --container-runtime=crio
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p false-910464 --memory=3072 --alsologtostderr --cni=false --driver=docker  --container-runtime=crio: exit status 14 (171.082047ms)

                                                
                                                
-- stdout --
	* [false-910464] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=22301
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22301-5579/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22301-5579/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1225 18:56:23.165141  200995 out.go:360] Setting OutFile to fd 1 ...
	I1225 18:56:23.165237  200995 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1225 18:56:23.165242  200995 out.go:374] Setting ErrFile to fd 2...
	I1225 18:56:23.165246  200995 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1225 18:56:23.165421  200995 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22301-5579/.minikube/bin
	I1225 18:56:23.165876  200995 out.go:368] Setting JSON to false
	I1225 18:56:23.167137  200995 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":2331,"bootTime":1766686652,"procs":297,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1225 18:56:23.167191  200995 start.go:143] virtualization: kvm guest
	I1225 18:56:23.169015  200995 out.go:179] * [false-910464] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1225 18:56:23.170107  200995 out.go:179]   - MINIKUBE_LOCATION=22301
	I1225 18:56:23.170111  200995 notify.go:221] Checking for updates...
	I1225 18:56:23.172138  200995 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1225 18:56:23.173213  200995 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22301-5579/kubeconfig
	I1225 18:56:23.177121  200995 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22301-5579/.minikube
	I1225 18:56:23.178308  200995 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1225 18:56:23.179464  200995 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1225 18:56:23.181090  200995 config.go:182] Loaded profile config "NoKubernetes-904366": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v0.0.0
	I1225 18:56:23.181290  200995 config.go:182] Loaded profile config "pause-720311": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1225 18:56:23.181405  200995 config.go:182] Loaded profile config "stopped-upgrade-746190": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.0
	I1225 18:56:23.181510  200995 driver.go:422] Setting default libvirt URI to qemu:///system
	I1225 18:56:23.205227  200995 docker.go:124] docker version: linux-29.1.3:Docker Engine - Community
	I1225 18:56:23.205354  200995 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1225 18:56:23.267258  200995 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:65 OomKillDisable:false NGoroutines:76 SystemTime:2025-12-25 18:56:23.252824051 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:dea7da592f5d1d2b7755e3a161be07f43fad8f75 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1225 18:56:23.267411  200995 docker.go:319] overlay module found
	I1225 18:56:23.269413  200995 out.go:179] * Using the docker driver based on user configuration
	I1225 18:56:23.270469  200995 start.go:309] selected driver: docker
	I1225 18:56:23.270483  200995 start.go:928] validating driver "docker" against <nil>
	I1225 18:56:23.270494  200995 start.go:939] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1225 18:56:23.275372  200995 out.go:203] 
	W1225 18:56:23.276463  200995 out.go:285] X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	I1225 18:56:23.277491  200995 out.go:203] 

                                                
                                                
** /stderr **
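The refusal above is purely a validation step: the crio runtime needs a CNI plugin, so `--cni=false` is rejected with exit 14 (MK_USAGE) before any container is created. A sketch of the offending invocation (dropping `--cni=false`, or choosing an actual CNI, avoids the error):

	out/minikube-linux-amd64 start -p false-910464 --memory=3072 --cni=false --driver=docker --container-runtime=crio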
net_test.go:88: 
----------------------- debugLogs start: false-910464 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-910464

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-910464

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-910464

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-910464

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-910464

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-910464

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-910464

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-910464

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-910464

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-910464

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "false-910464" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-910464"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "false-910464" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-910464"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "false-910464" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-910464"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-910464

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "false-910464" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-910464"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "false-910464" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-910464"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "false-910464" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "false-910464" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "false-910464" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "false-910464" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "false-910464" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "false-910464" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "false-910464" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "false-910464" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "false-910464" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-910464"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "false-910464" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-910464"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "false-910464" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-910464"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "false-910464" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-910464"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "false-910464" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-910464"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "false-910464" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "false-910464" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "false-910464" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "false-910464" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-910464"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "false-910464" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-910464"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "false-910464" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-910464"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-910464" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-910464"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-910464" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-910464"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/22301-5579/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Thu, 25 Dec 2025 18:56:02 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.94.2:8443
  name: NoKubernetes-904366
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/22301-5579/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Thu, 25 Dec 2025 18:56:19 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.76.2:8443
  name: pause-720311
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/22301-5579/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Thu, 25 Dec 2025 18:56:15 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.103.2:8443
  name: stopped-upgrade-746190
contexts:
- context:
    cluster: NoKubernetes-904366
    extensions:
    - extension:
        last-update: Thu, 25 Dec 2025 18:56:02 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: context_info
    namespace: default
    user: NoKubernetes-904366
  name: NoKubernetes-904366
- context:
    cluster: pause-720311
    extensions:
    - extension:
        last-update: Thu, 25 Dec 2025 18:56:19 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: context_info
    namespace: default
    user: pause-720311
  name: pause-720311
- context:
    cluster: stopped-upgrade-746190
    user: stopped-upgrade-746190
  name: stopped-upgrade-746190
current-context: pause-720311
kind: Config
users:
- name: NoKubernetes-904366
  user:
    client-certificate: /home/jenkins/minikube-integration/22301-5579/.minikube/profiles/NoKubernetes-904366/client.crt
    client-key: /home/jenkins/minikube-integration/22301-5579/.minikube/profiles/NoKubernetes-904366/client.key
- name: pause-720311
  user:
    client-certificate: /home/jenkins/minikube-integration/22301-5579/.minikube/profiles/pause-720311/client.crt
    client-key: /home/jenkins/minikube-integration/22301-5579/.minikube/profiles/pause-720311/client.key
- name: stopped-upgrade-746190
  user:
    client-certificate: /home/jenkins/minikube-integration/22301-5579/.minikube/profiles/stopped-upgrade-746190/client.crt
    client-key: /home/jenkins/minikube-integration/22301-5579/.minikube/profiles/stopped-upgrade-746190/client.key
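Note: a minimal sketch of exercising the contexts captured above, assuming this dump is the active kubeconfig on the host (its file path is not shown in the log), would be the standard kubectl context commands:

	kubectl config get-contexts
	kubectl config use-context pause-720311
	kubectl --context pause-720311 get nodes

The "false-910464" context queried by the surrounding debug steps does not appear in this file, which is consistent with the "context was not found" / "does not exist" errors elsewhere in this debug log.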

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: false-910464

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "false-910464" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-910464"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "false-910464" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-910464"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "false-910464" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-910464"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "false-910464" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-910464"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "false-910464" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-910464"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "false-910464" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-910464"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-910464" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-910464"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-910464" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-910464"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "false-910464" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-910464"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "false-910464" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-910464"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "false-910464" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-910464"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "false-910464" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-910464"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "false-910464" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-910464"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "false-910464" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-910464"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "false-910464" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-910464"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "false-910464" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-910464"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "false-910464" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-910464"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "false-910464" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-910464"

                                                
                                                
----------------------- debugLogs end: false-910464 [took: 3.477733727s] --------------------------------
helpers_test.go:176: Cleaning up "false-910464" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p false-910464
--- PASS: TestNetworkPlugins/group/false (3.83s)

                                                
                                    
TestNoKubernetes/serial/Start (9.64s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:161: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-904366 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:161: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-904366 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (9.639450058s)
--- PASS: TestNoKubernetes/serial/Start (9.64s)

                                                
                                    
TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads (0s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads
no_kubernetes_test.go:89: Checking cache directory: /home/jenkins/minikube-integration/22301-5579/.minikube/cache/linux/amd64/v0.0.0
--- PASS: TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads (0.00s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunning (0.31s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:172: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-904366 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:172: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-904366 "sudo systemctl is-active --quiet service kubelet": exit status 1 (310.282122ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.31s)

                                                
                                    
TestNoKubernetes/serial/ProfileList (17.42s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:194: (dbg) Run:  out/minikube-linux-amd64 profile list
no_kubernetes_test.go:204: (dbg) Run:  out/minikube-linux-amd64 profile list --output=json
no_kubernetes_test.go:204: (dbg) Done: out/minikube-linux-amd64 profile list --output=json: (16.436466007s)
--- PASS: TestNoKubernetes/serial/ProfileList (17.42s)

                                                
                                    
TestNoKubernetes/serial/Stop (1.29s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:183: (dbg) Run:  out/minikube-linux-amd64 stop -p NoKubernetes-904366
no_kubernetes_test.go:183: (dbg) Done: out/minikube-linux-amd64 stop -p NoKubernetes-904366: (1.291694909s)
--- PASS: TestNoKubernetes/serial/Stop (1.29s)

                                                
                                    
TestNoKubernetes/serial/StartNoArgs (6.71s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:216: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-904366 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:216: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-904366 --driver=docker  --container-runtime=crio: (6.713493664s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (6.71s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.29s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:172: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-904366 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:172: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-904366 "sudo systemctl is-active --quiet service kubelet": exit status 1 (293.623639ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.29s)

                                                
                                    
TestPreload/Start-NoPreload-PullImage (55.97s)

                                                
                                                
=== RUN   TestPreload/Start-NoPreload-PullImage
preload_test.go:49: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-632730 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio
E1225 18:58:52.419572    9112 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22301-5579/.minikube/profiles/functional-984202/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
preload_test.go:49: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-632730 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio: (49.202417691s)
preload_test.go:56: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-632730 image pull ghcr.io/medyagh/image-mirrors/busybox:latest
preload_test.go:62: (dbg) Run:  out/minikube-linux-amd64 stop -p test-preload-632730
preload_test.go:62: (dbg) Done: out/minikube-linux-amd64 stop -p test-preload-632730: (6.22119555s)
--- PASS: TestPreload/Start-NoPreload-PullImage (55.97s)

                                                
                                    
TestPreload/Restart-With-Preload-Check-User-Image (51.2s)

                                                
                                                
=== RUN   TestPreload/Restart-With-Preload-Check-User-Image
preload_test.go:71: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-632730 --preload=true --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=crio
E1225 18:59:40.369755    9112 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22301-5579/.minikube/profiles/addons-335994/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
preload_test.go:71: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-632730 --preload=true --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=crio: (50.970760823s)
preload_test.go:76: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-632730 image list
--- PASS: TestPreload/Restart-With-Preload-Check-User-Image (51.20s)

                                                
                                    
TestStoppedBinaryUpgrade/MinikubeLogs (1s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-amd64 logs -p stopped-upgrade-746190
version_upgrade_test.go:206: (dbg) Done: out/minikube-linux-amd64 logs -p stopped-upgrade-746190: (1.00280492s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (1.00s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/FirstStart (50.85s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-163446 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-163446 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0: (50.852980134s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (50.85s)

                                                
                                    
TestStartStop/group/no-preload/serial/FirstStart (50.21s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-148352 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-rc.1
E1225 19:01:20.320915    9112 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22301-5579/.minikube/profiles/functional-969923/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-148352 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-rc.1: (50.207128718s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (50.21s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/DeployApp (9.51s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-163446 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:353: "busybox" [d7ba23a7-2bd3-4170-952b-a664e8b82355] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:353: "busybox" [d7ba23a7-2bd3-4170-952b-a664e8b82355] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 9.003648948s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-163446 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (9.51s)

                                                
                                    
TestStartStop/group/embed-certs/serial/FirstStart (41.85s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-684693 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.3
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-684693 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.3: (41.852218945s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (41.85s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/Stop (16.14s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p old-k8s-version-163446 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p old-k8s-version-163446 --alsologtostderr -v=3: (16.136692134s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (16.14s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.24s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-163446 -n old-k8s-version-163446
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-163446 -n old-k8s-version-163446: exit status 7 (88.119653ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-163446 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.24s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/SecondStart (50.47s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-163446 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-163446 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0: (50.119808707s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-163446 -n old-k8s-version-163446
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (50.47s)

                                                
                                    
TestStartStop/group/no-preload/serial/DeployApp (8.23s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-148352 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:353: "busybox" [cdb08b45-a83a-46fd-8df3-e2adf0b2917e] Pending
helpers_test.go:353: "busybox" [cdb08b45-a83a-46fd-8df3-e2adf0b2917e] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:353: "busybox" [cdb08b45-a83a-46fd-8df3-e2adf0b2917e] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 8.003380958s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-148352 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (8.23s)

                                                
                                    
TestStartStop/group/no-preload/serial/Stop (16.72s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p no-preload-148352 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p no-preload-148352 --alsologtostderr -v=3: (16.720510004s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (16.72s)

                                                
                                    
TestStartStop/group/embed-certs/serial/DeployApp (7.23s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-684693 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:353: "busybox" [f8cdecb5-792b-4f73-bbd6-1c06cdaeb7bc] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:353: "busybox" [f8cdecb5-792b-4f73-bbd6-1c06cdaeb7bc] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 7.003661083s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-684693 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (7.23s)

                                                
                                    
TestStartStop/group/embed-certs/serial/Stop (18.16s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p embed-certs-684693 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p embed-certs-684693 --alsologtostderr -v=3: (18.158511049s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (18.16s)

                                                
                                    
TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.2s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-148352 -n no-preload-148352
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-148352 -n no-preload-148352: exit status 7 (77.972112ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p no-preload-148352 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.20s)

                                                
                                    
TestStartStop/group/no-preload/serial/SecondStart (49.67s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-148352 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-rc.1
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-148352 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-rc.1: (49.344674812s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-148352 -n no-preload-148352
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (49.67s)

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.19s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-684693 -n embed-certs-684693
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-684693 -n embed-certs-684693: exit status 7 (77.217746ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p embed-certs-684693 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.19s)

                                                
                                    
TestStartStop/group/embed-certs/serial/SecondStart (49.22s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-684693 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.3
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-684693 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.3: (48.867933926s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-684693 -n embed-certs-684693
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (49.22s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.01s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:353: "kubernetes-dashboard-8694d4445c-9sffb" [8670172f-1b60-424f-b7a5-cf89fb165120] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003968433s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.01s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.1s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:353: "kubernetes-dashboard-8694d4445c-9sffb" [8670172f-1b60-424f-b7a5-cf89fb165120] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004327849s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context old-k8s-version-163446 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.10s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.41s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-163446 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20251212-v0.29.0-alpha-105-g20ccfc88
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20230511-dc714da8
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.41s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/FirstStart (38.14s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-960022 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.3
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-960022 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.3: (38.139351714s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (38.14s)

                                                
                                    
TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:353: "kubernetes-dashboard-b84665fb8-5ngsn" [3949581b-0929-46a9-830c-23b0babb1c19] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003452444s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.00s)

                                                
                                    
TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.07s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:353: "kubernetes-dashboard-b84665fb8-5ngsn" [3949581b-0929-46a9-830c-23b0babb1c19] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.00404344s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context no-preload-148352 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.07s)

                                                
                                    
TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.25s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-148352 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20251212-v0.29.0-alpha-105-g20ccfc88
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.25s)

                                                
                                    
TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:353: "kubernetes-dashboard-855c9754f9-xv29k" [22cdf105-cc29-4664-bb39-988c3cbbed55] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.00461169s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)

                                                
                                    
TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.08s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:353: "kubernetes-dashboard-855c9754f9-xv29k" [22cdf105-cc29-4664-bb39-988c3cbbed55] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004665957s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context embed-certs-684693 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.08s)

                                                
                                    
TestStartStop/group/newest-cni/serial/FirstStart (23.51s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-731832 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-rc.1
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-731832 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-rc.1: (23.509138778s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (23.51s)

                                                
                                    
TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.26s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-684693 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20251212-v0.29.0-alpha-105-g20ccfc88
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.26s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/DeployApp (9.28s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-960022 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:353: "busybox" [0defcc3b-45da-4e19-8614-16aacfe1ebfd] Pending
helpers_test.go:353: "busybox" [0defcc3b-45da-4e19-8614-16aacfe1ebfd] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:353: "busybox" [0defcc3b-45da-4e19-8614-16aacfe1ebfd] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 9.004105937s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-960022 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (9.28s)

                                                
                                    
TestNetworkPlugins/group/auto/Start (39.26s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p auto-910464 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p auto-910464 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio: (39.260113018s)
--- PASS: TestNetworkPlugins/group/auto/Start (39.26s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/Stop (18.23s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p default-k8s-diff-port-960022 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p default-k8s-diff-port-960022 --alsologtostderr -v=3: (18.226948612s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (18.23s)

                                                
                                    
TestStartStop/group/newest-cni/serial/DeployApp (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/Stop (8.08s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p newest-cni-731832 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p newest-cni-731832 --alsologtostderr -v=3: (8.079128099s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (8.08s)

                                                
                                    
TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.2s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-731832 -n newest-cni-731832
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-731832 -n newest-cni-731832: exit status 7 (78.670227ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p newest-cni-731832 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.20s)

                                                
                                    
TestStartStop/group/newest-cni/serial/SecondStart (10.73s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-731832 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-rc.1
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-731832 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-rc.1: (10.383525265s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-731832 -n newest-cni-731832
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (10.73s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.21s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-960022 -n default-k8s-diff-port-960022
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-960022 -n default-k8s-diff-port-960022: exit status 7 (83.617795ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-960022 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.21s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/SecondStart (46.07s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-960022 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.3
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-960022 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.3: (45.737624121s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-960022 -n default-k8s-diff-port-960022
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (46.07s)

                                                
                                    
TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:271: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:282: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.25s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-731832 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20251212-v0.29.0-alpha-105-g20ccfc88
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.25s)

                                                
                                    
TestNetworkPlugins/group/auto/KubeletFlags (0.3s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p auto-910464 "pgrep -a kubelet"
I1225 19:04:23.592358    9112 config.go:182] Loaded profile config "auto-910464": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.30s)

                                                
                                    
TestNetworkPlugins/group/auto/NetCatPod (9.33s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-910464 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:353: "netcat-cd4db9dbf-m2bmk" [31ecd731-8291-438a-a5fd-18c9d0dc5afa] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:353: "netcat-cd4db9dbf-m2bmk" [31ecd731-8291-438a-a5fd-18c9d0dc5afa] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 9.102220516s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (9.33s)

                                                
                                    
TestNetworkPlugins/group/kindnet/Start (41.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p kindnet-910464 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p kindnet-910464 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=crio: (41.123728224s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (41.12s)

                                                
                                    
TestNetworkPlugins/group/auto/DNS (0.2s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-910464 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.20s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/Localhost (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-910464 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.16s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/HairPin (0.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-910464 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.19s)
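DNS, Localhost and HairPin are each a single probe executed inside the netcat pod: resolve the in-cluster API service name, connect to localhost, and connect back to the pod's own service name (the hairpin case). The three commands, copied from the log entries above:

  kubectl --context auto-910464 exec deployment/netcat -- nslookup kubernetes.default
  kubectl --context auto-910464 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
  kubectl --context auto-910464 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"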

                                                
                                    
x
+
TestNetworkPlugins/group/calico/Start (50.24s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p calico-910464 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p calico-910464 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=crio: (50.242380312s)
--- PASS: TestNetworkPlugins/group/calico/Start (50.24s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:353: "kubernetes-dashboard-855c9754f9-hm5lx" [877f70b3-c96c-4876-8dbe-f0ad7d7e0a01] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003537802s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.00s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.08s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:353: "kubernetes-dashboard-855c9754f9-hm5lx" [877f70b3-c96c-4876-8dbe-f0ad7d7e0a01] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003653674s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context default-k8s-diff-port-960022 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.08s)
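UserAppExistsAfterStop and AddonExistsAfterStop both watch for the dashboard pod after the stop/start cycle; AddonExistsAfterStop additionally describes the metrics-scraper deployment. A manual spot-check along the same lines, using the context name from the log:

  kubectl --context default-k8s-diff-port-960022 get pods -n kubernetes-dashboard -l k8s-app=kubernetes-dashboard
  kubectl --context default-k8s-diff-port-960022 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard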

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.28s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-960022 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20251212-v0.29.0-alpha-105-g20ccfc88
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.28s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:353: "kindnet-hsfxd" [c2a15ba2-8a5a-4895-8e79-bfb006e2ad60] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.004258759s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)
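ControllerPod subtests only wait for the CNI's own daemonset pod to be Running, identified by label: app=kindnet in kube-system here, while the calico group waits on k8s-app=calico-node in kube-system and the flannel group on app=flannel in kube-flannel, per their entries below. The equivalent one-liner for this group:

  kubectl --context kindnet-910464 get pods -n kube-system -l app=kindnet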

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/KubeletFlags (0.31s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p kindnet-910464 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.31s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/NetCatPod (9.22s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-910464 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:353: "netcat-cd4db9dbf-gtj8h" [00f79e6e-d079-4ab4-ae58-59470bafd9fc] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:353: "netcat-cd4db9dbf-gtj8h" [00f79e6e-d079-4ab4-ae58-59470bafd9fc] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 9.003836733s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (9.22s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/Start (49.34s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-flannel-910464 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-flannel-910464 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=crio: (49.343894427s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (49.34s)
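Besides named plugins, --cni also accepts a path to a CNI manifest; the custom-flannel run above applies testdata/kube-flannel.yaml instead of a built-in option. Stripped to the relevant flags:

  out/minikube-linux-amd64 start -p custom-flannel-910464 --memory=3072 --wait=true --wait-timeout=15m \
    --cni=testdata/kube-flannel.yaml --driver=docker --container-runtime=crio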

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/DNS (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-910464 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.13s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/Localhost (0.11s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-910464 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.11s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/HairPin (0.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-910464 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.12s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:353: "calico-node-pd4sw" [5eaacf5e-82ac-470f-a567-3dc9f9bad8b6] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.004632152s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/Start (65.49s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p enable-default-cni-910464 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p enable-default-cni-910464 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=crio: (1m5.493295265s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (65.49s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/Start (50.2s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p flannel-910464 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p flannel-910464 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=crio: (50.201198265s)
--- PASS: TestNetworkPlugins/group/flannel/Start (50.20s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/KubeletFlags (0.38s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p calico-910464 "pgrep -a kubelet"
I1225 19:05:49.523776    9112 config.go:182] Loaded profile config "calico-910464": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.38s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/NetCatPod (13.23s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-910464 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:353: "netcat-cd4db9dbf-2tqvp" [ad7ffcdc-5cf1-46d8-8465-53fa84b6dc0a] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:353: "netcat-cd4db9dbf-2tqvp" [ad7ffcdc-5cf1-46d8-8465-53fa84b6dc0a] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 13.004303237s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (13.23s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/DNS (0.11s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-910464 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.11s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/Localhost (0.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-910464 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.12s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/HairPin (0.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-910464 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.12s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.32s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p custom-flannel-910464 "pgrep -a kubelet"
I1225 19:06:08.610880    9112 config.go:182] Loaded profile config "custom-flannel-910464": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.32s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/NetCatPod (9.23s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-910464 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:353: "netcat-cd4db9dbf-t72d9" [5766e40f-7cb5-4179-86c1-9df610005ac4] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:353: "netcat-cd4db9dbf-t72d9" [5766e40f-7cb5-4179-86c1-9df610005ac4] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 9.004919023s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (9.23s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/DNS (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-910464 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.16s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/Localhost (0.1s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-910464 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.10s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/HairPin (0.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-910464 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.12s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/Start (68.62s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p bridge-910464 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=crio
E1225 19:06:24.455841    9112 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22301-5579/.minikube/profiles/old-k8s-version-163446/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1225 19:06:25.096993    9112 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22301-5579/.minikube/profiles/old-k8s-version-163446/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1225 19:06:26.377533    9112 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22301-5579/.minikube/profiles/old-k8s-version-163446/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1225 19:06:28.938298    9112 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22301-5579/.minikube/profiles/old-k8s-version-163446/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1225 19:06:34.058779    9112 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22301-5579/.minikube/profiles/old-k8s-version-163446/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p bridge-910464 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=crio: (1m8.617500861s)
--- PASS: TestNetworkPlugins/group/bridge/Start (68.62s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:353: "kube-flannel-ds-zx7r9" [bdf07fd5-a630-4e3b-b771-87da2030e5be] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.003907567s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

                                                
                                    
x
+
TestPreload/PreloadSrc/gcs (4.05s)

                                                
                                                
=== RUN   TestPreload/PreloadSrc/gcs
preload_test.go:110: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-dl-gcs-840598 --download-only --kubernetes-version v1.34.0-rc.1 --preload-source=gcs --alsologtostderr --v=1 --driver=docker  --container-runtime=crio
preload_test.go:110: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-dl-gcs-840598 --download-only --kubernetes-version v1.34.0-rc.1 --preload-source=gcs --alsologtostderr --v=1 --driver=docker  --container-runtime=crio: (3.801847386s)
helpers_test.go:176: Cleaning up "test-preload-dl-gcs-840598" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-dl-gcs-840598
--- PASS: TestPreload/PreloadSrc/gcs (4.05s)

                                                
                                    
x
+
TestPreload/PreloadSrc/github (5.15s)

                                                
                                                
=== RUN   TestPreload/PreloadSrc/github
preload_test.go:110: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-dl-github-584772 --download-only --kubernetes-version v1.34.0-rc.2 --preload-source=github --alsologtostderr --v=1 --driver=docker  --container-runtime=crio
preload_test.go:110: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-dl-github-584772 --download-only --kubernetes-version v1.34.0-rc.2 --preload-source=github --alsologtostderr --v=1 --driver=docker  --container-runtime=crio: (4.946685145s)
helpers_test.go:176: Cleaning up "test-preload-dl-github-584772" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-dl-github-584772
--- PASS: TestPreload/PreloadSrc/github (5.15s)
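The PreloadSrc subtests exercise only the download path: --download-only skips cluster creation, and --preload-source picks where the preloaded images tarball is fetched from (gcs and github above; the gcs-cached variant further down reuses what the gcs run already downloaded, hence its sub-second duration). Condensed from the log:

  out/minikube-linux-amd64 start -p test-preload-dl-gcs-840598 --download-only \
    --kubernetes-version v1.34.0-rc.1 --preload-source=gcs --driver=docker --container-runtime=crio
  out/minikube-linux-amd64 delete -p test-preload-dl-gcs-840598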

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/KubeletFlags (0.3s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p flannel-910464 "pgrep -a kubelet"
I1225 19:06:43.593994    9112 config.go:182] Loaded profile config "flannel-910464": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.30s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/NetCatPod (9.25s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-910464 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:353: "netcat-cd4db9dbf-mmsbj" [c5f76ea4-0933-4b89-b5f2-38c60a3e21a6] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E1225 19:06:44.299042    9112 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22301-5579/.minikube/profiles/old-k8s-version-163446/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:353: "netcat-cd4db9dbf-mmsbj" [c5f76ea4-0933-4b89-b5f2-38c60a3e21a6] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 9.002560133s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (9.25s)

                                                
                                    
x
+
TestPreload/PreloadSrc/gcs-cached (0.47s)

                                                
                                                
=== RUN   TestPreload/PreloadSrc/gcs-cached
preload_test.go:110: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-dl-gcs-cached-608153 --download-only --kubernetes-version v1.34.0-rc.2 --preload-source=gcs --alsologtostderr --v=1 --driver=docker  --container-runtime=crio
helpers_test.go:176: Cleaning up "test-preload-dl-gcs-cached-608153" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-dl-gcs-cached-608153
--- PASS: TestPreload/PreloadSrc/gcs-cached (0.47s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.29s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p enable-default-cni-910464 "pgrep -a kubelet"
I1225 19:06:52.683546    9112 config.go:182] Loaded profile config "enable-default-cni-910464": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.29s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/NetCatPod (9.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-910464 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:353: "netcat-cd4db9dbf-559sw" [6a84c15c-90bf-4192-a6c3-d897353ffd7a] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:353: "netcat-cd4db9dbf-559sw" [6a84c15c-90bf-4192-a6c3-d897353ffd7a] Running
E1225 19:06:55.989829    9112 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22301-5579/.minikube/profiles/no-preload-148352/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1225 19:06:56.630323    9112 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22301-5579/.minikube/profiles/no-preload-148352/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1225 19:06:57.910884    9112 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22301-5579/.minikube/profiles/no-preload-148352/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1225 19:07:00.471771    9112 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22301-5579/.minikube/profiles/no-preload-148352/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 9.003832463s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (9.17s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/DNS (0.11s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-910464 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.11s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/Localhost (0.1s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-910464 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.10s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/HairPin (0.09s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-910464 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.09s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/DNS (0.11s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-910464 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.11s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/Localhost (0.09s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-910464 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.09s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/HairPin (0.09s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-910464 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.09s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/KubeletFlags (0.28s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p bridge-910464 "pgrep -a kubelet"
I1225 19:07:33.100119    9112 config.go:182] Loaded profile config "bridge-910464": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.28s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/NetCatPod (9.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-910464 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:353: "netcat-cd4db9dbf-mrm4d" [41f9b6cc-75b5-4daa-95ba-3a70ca2e9269] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:353: "netcat-cd4db9dbf-mrm4d" [41f9b6cc-75b5-4daa-95ba-3a70ca2e9269] Running
E1225 19:07:36.312828    9112 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22301-5579/.minikube/profiles/no-preload-148352/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 9.003975108s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (9.17s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/DNS (0.1s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-910464 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.10s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/Localhost (0.08s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-910464 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.08s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/HairPin (0.08s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-910464 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.08s)

                                                
                                    

Test skip (34/419)

Order  Skipped test  Duration (s)
5 TestDownloadOnly/v1.28.0/cached-images 0
6 TestDownloadOnly/v1.28.0/binaries 0
7 TestDownloadOnly/v1.28.0/kubectl 0
14 TestDownloadOnly/v1.34.3/cached-images 0
15 TestDownloadOnly/v1.34.3/binaries 0
16 TestDownloadOnly/v1.34.3/kubectl 0
23 TestDownloadOnly/v1.35.0-rc.1/cached-images 0
24 TestDownloadOnly/v1.35.0-rc.1/binaries 0
25 TestDownloadOnly/v1.35.0-rc.1/kubectl 0
42 TestAddons/serial/GCPAuth/RealCredentials 0
49 TestAddons/parallel/Olm 0
60 TestDockerFlags 0
63 TestDockerEnvContainerd 0
64 TestHyperKitDriverInstallOrUpdate 0
65 TestHyperkitDriverSkipUpgrade 0
116 TestFunctional/parallel/DockerEnv 0
117 TestFunctional/parallel/PodmanEnv 0
154 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig 0
155 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 0
156 TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS 0
211 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/DockerEnv 0
212 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PodmanEnv 0
247 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/DNSResolutionByDig 0
248 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 0
249 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/AccessThroughDNS 0
262 TestGvisorAddon 0
284 TestImageBuild 0
285 TestISOImage 0
349 TestChangeNoneUser 0
352 TestScheduledStopWindows 0
354 TestSkaffold 0
375 TestStartStop/group/disable-driver-mounts 0.19
379 TestNetworkPlugins/group/kubenet 3.28
388 TestNetworkPlugins/group/cilium 4.12
x
+
TestDownloadOnly/v1.28.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.28.0/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/binaries
aaa_download_only_test.go:150: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.28.0/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.28.0/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.34.3/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.3/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.34.3/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.34.3/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.3/binaries
aaa_download_only_test.go:150: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.34.3/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.34.3/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.3/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.34.3/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.35.0-rc.1/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.35.0-rc.1/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.35.0-rc.1/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.35.0-rc.1/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.35.0-rc.1/binaries
aaa_download_only_test.go:150: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.35.0-rc.1/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.35.0-rc.1/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.35.0-rc.1/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.35.0-rc.1/kubectl (0.00s)

                                                
                                    
x
+
TestAddons/serial/GCPAuth/RealCredentials (0s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/RealCredentials
addons_test.go:765: skipping GCPAuth addon test until 'Permission "artifactregistry.repositories.downloadArtifacts" denied on resource "projects/k8s-minikube/locations/us/repositories/test-artifacts" (or it may not exist)' issue is resolved
--- SKIP: TestAddons/serial/GCPAuth/RealCredentials (0.00s)

                                                
                                    
x
+
TestAddons/parallel/Olm (0s)

                                                
                                                
=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Olm
addons_test.go:485: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

                                                
                                    
x
+
TestDockerFlags (0s)

                                                
                                                
=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing crio
--- SKIP: TestDockerFlags (0.00s)

                                                
                                    
x
+
TestDockerEnvContainerd (0s)

                                                
                                                
=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with crio true linux amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

                                                
                                    
x
+
TestHyperKitDriverInstallOrUpdate (0s)

                                                
                                                
=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:37: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

                                                
                                    
x
+
TestHyperkitDriverSkipUpgrade (0s)

                                                
                                                
=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:101: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/DockerEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:478: only validate docker env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/PodmanEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:565: only validate podman env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/DockerEnv (0s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/DockerEnv
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/DockerEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/DockerEnv
functional_test.go:478: only validate docker env with docker container runtime, currently testing crio
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/DockerEnv (0.00s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PodmanEnv (0s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PodmanEnv
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PodmanEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PodmanEnv
functional_test.go:565: only validate podman env with docker container runtime, currently testing crio
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PodmanEnv (0.00s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

                                                
                                    
x
+
TestGvisorAddon (0s)

                                                
                                                
=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

                                                
                                    
x
+
TestImageBuild (0s)

                                                
                                                
=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

                                                
                                    
x
+
TestISOImage (0s)

                                                
                                                
=== RUN   TestISOImage
iso_test.go:36: This test requires a VM driver
--- SKIP: TestISOImage (0.00s)

                                                
                                    
x
+
TestChangeNoneUser (0s)

                                                
                                                
=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

                                                
                                    
x
+
TestScheduledStopWindows (0s)

                                                
                                                
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

                                                
                                    
x
+
TestSkaffold (0s)

                                                
                                                
=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing crio container runtime
--- SKIP: TestSkaffold (0.00s)

                                                
                                    
x
+
TestStartStop/group/disable-driver-mounts (0.19s)

                                                
                                                
=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

                                                
                                                

                                                
                                                
=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:101: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:176: Cleaning up "disable-driver-mounts-102827" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p disable-driver-mounts-102827
--- SKIP: TestStartStop/group/disable-driver-mounts (0.19s)

                                                
                                    
x
+
TestNetworkPlugins/group/kubenet (3.28s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as crio container runtimes requires CNI
E1225 18:56:20.320953    9112 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22301-5579/.minikube/profiles/functional-969923/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
panic.go:615: 
----------------------- debugLogs start: kubenet-910464 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-910464

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-910464

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-910464

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-910464

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-910464

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-910464

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-910464

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-910464

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-910464

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-910464

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "kubenet-910464" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-910464"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "kubenet-910464" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-910464"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "kubenet-910464" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-910464"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-910464

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "kubenet-910464" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-910464"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "kubenet-910464" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-910464"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "kubenet-910464" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "kubenet-910464" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "kubenet-910464" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "kubenet-910464" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "kubenet-910464" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "kubenet-910464" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "kubenet-910464" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "kubenet-910464" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "kubenet-910464" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-910464"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "kubenet-910464" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-910464"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "kubenet-910464" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-910464"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "kubenet-910464" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-910464"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "kubenet-910464" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-910464"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-910464" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-910464" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "kubenet-910464" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "kubenet-910464" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-910464"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "kubenet-910464" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-910464"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "kubenet-910464" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-910464"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-910464" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-910464"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-910464" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-910464"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/22301-5579/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Thu, 25 Dec 2025 18:56:02 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.94.2:8443
  name: NoKubernetes-904366
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/22301-5579/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Thu, 25 Dec 2025 18:56:19 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.76.2:8443
  name: pause-720311
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/22301-5579/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Thu, 25 Dec 2025 18:56:15 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.103.2:8443
  name: stopped-upgrade-746190
contexts:
- context:
    cluster: NoKubernetes-904366
    extensions:
    - extension:
        last-update: Thu, 25 Dec 2025 18:56:02 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: context_info
    namespace: default
    user: NoKubernetes-904366
  name: NoKubernetes-904366
- context:
    cluster: pause-720311
    extensions:
    - extension:
        last-update: Thu, 25 Dec 2025 18:56:19 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: context_info
    namespace: default
    user: pause-720311
  name: pause-720311
- context:
    cluster: stopped-upgrade-746190
    user: stopped-upgrade-746190
  name: stopped-upgrade-746190
current-context: pause-720311
kind: Config
users:
- name: NoKubernetes-904366
  user:
    client-certificate: /home/jenkins/minikube-integration/22301-5579/.minikube/profiles/NoKubernetes-904366/client.crt
    client-key: /home/jenkins/minikube-integration/22301-5579/.minikube/profiles/NoKubernetes-904366/client.key
- name: pause-720311
  user:
    client-certificate: /home/jenkins/minikube-integration/22301-5579/.minikube/profiles/pause-720311/client.crt
    client-key: /home/jenkins/minikube-integration/22301-5579/.minikube/profiles/pause-720311/client.key
- name: stopped-upgrade-746190
  user:
    client-certificate: /home/jenkins/minikube-integration/22301-5579/.minikube/profiles/stopped-upgrade-746190/client.crt
    client-key: /home/jenkins/minikube-integration/22301-5579/.minikube/profiles/stopped-upgrade-746190/client.key

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-910464

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "kubenet-910464" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-910464"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "kubenet-910464" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-910464"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "kubenet-910464" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-910464"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "kubenet-910464" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-910464"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "kubenet-910464" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-910464"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "kubenet-910464" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-910464"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-910464" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-910464"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-910464" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-910464"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "kubenet-910464" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-910464"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "kubenet-910464" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-910464"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "kubenet-910464" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-910464"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-910464" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-910464"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "kubenet-910464" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-910464"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "kubenet-910464" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-910464"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "kubenet-910464" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-910464"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "kubenet-910464" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-910464"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "kubenet-910464" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-910464"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "kubenet-910464" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-910464"

                                                
                                                
----------------------- debugLogs end: kubenet-910464 [took: 3.120399061s] --------------------------------
helpers_test.go:176: Cleaning up "kubenet-910464" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p kubenet-910464
--- SKIP: TestNetworkPlugins/group/kubenet (3.28s)

                                                
                                    
TestNetworkPlugins/group/cilium (4.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:615: 
----------------------- debugLogs start: cilium-910464 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-910464

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-910464

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-910464

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-910464

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-910464

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-910464

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-910464

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-910464

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-910464

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-910464

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "cilium-910464" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-910464"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "cilium-910464" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-910464"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "cilium-910464" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-910464"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-910464

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "cilium-910464" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-910464"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "cilium-910464" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-910464"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "cilium-910464" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "cilium-910464" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "cilium-910464" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "cilium-910464" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "cilium-910464" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "cilium-910464" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "cilium-910464" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "cilium-910464" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "cilium-910464" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-910464"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "cilium-910464" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-910464"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "cilium-910464" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-910464"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "cilium-910464" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-910464"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "cilium-910464" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-910464"

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-910464

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-910464

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-910464" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-910464" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-910464

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-910464

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-910464" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-910464" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "cilium-910464" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "cilium-910464" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "cilium-910464" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "cilium-910464" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-910464"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "cilium-910464" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-910464"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "cilium-910464" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-910464"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-910464" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-910464"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-910464" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-910464"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/22301-5579/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Thu, 25 Dec 2025 18:56:19 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.76.2:8443
  name: pause-720311
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/22301-5579/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Thu, 25 Dec 2025 18:56:15 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.103.2:8443
  name: stopped-upgrade-746190
contexts:
- context:
    cluster: pause-720311
    extensions:
    - extension:
        last-update: Thu, 25 Dec 2025 18:56:19 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: context_info
    namespace: default
    user: pause-720311
  name: pause-720311
- context:
    cluster: stopped-upgrade-746190
    user: stopped-upgrade-746190
  name: stopped-upgrade-746190
current-context: pause-720311
kind: Config
users:
- name: pause-720311
  user:
    client-certificate: /home/jenkins/minikube-integration/22301-5579/.minikube/profiles/pause-720311/client.crt
    client-key: /home/jenkins/minikube-integration/22301-5579/.minikube/profiles/pause-720311/client.key
- name: stopped-upgrade-746190
  user:
    client-certificate: /home/jenkins/minikube-integration/22301-5579/.minikube/profiles/stopped-upgrade-746190/client.crt
    client-key: /home/jenkins/minikube-integration/22301-5579/.minikube/profiles/stopped-upgrade-746190/client.key

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-910464

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "cilium-910464" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-910464"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "cilium-910464" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-910464"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "cilium-910464" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-910464"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "cilium-910464" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-910464"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "cilium-910464" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-910464"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "cilium-910464" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-910464"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-910464" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-910464"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-910464" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-910464"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "cilium-910464" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-910464"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "cilium-910464" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-910464"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "cilium-910464" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-910464"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-910464" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-910464"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "cilium-910464" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-910464"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "cilium-910464" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-910464"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "cilium-910464" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-910464"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "cilium-910464" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-910464"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "cilium-910464" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-910464"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "cilium-910464" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-910464"

                                                
                                                
----------------------- debugLogs end: cilium-910464 [took: 3.938911694s] --------------------------------
helpers_test.go:176: Cleaning up "cilium-910464" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p cilium-910464
--- SKIP: TestNetworkPlugins/group/cilium (4.12s)

                                                
                                    